00:00:00.001 Started by upstream project "autotest-per-patch" build number 126217
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.021 The recommended git tool is: git
00:00:00.021 using credential 00000000-0000-0000-0000-000000000002
00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.037 Fetching changes from the remote Git repository
00:00:00.041 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.069 Using shallow fetch with depth 1
00:00:00.069 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.069 > git --version # timeout=10
00:00:00.106 > git --version # 'git version 2.39.2'
00:00:00.106 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.139 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.139 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.628 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.640 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.651 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:02.651 > git config core.sparsecheckout # timeout=10
00:00:02.663 > git read-tree -mu HEAD # timeout=10
00:00:02.680 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:02.703 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:02.703 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:02.806 [Pipeline] Start of Pipeline
00:00:02.819 [Pipeline] library
00:00:02.820 Loading library shm_lib@master
00:00:02.820 Library shm_lib@master is cached. Copying from home.
00:00:02.837 [Pipeline] node
00:00:02.844 Running on WFP12 in /var/jenkins/workspace/crypto-phy-autotest
00:00:02.846 [Pipeline] {
00:00:02.856 [Pipeline] catchError
00:00:02.858 [Pipeline] {
00:00:02.873 [Pipeline] wrap
00:00:02.882 [Pipeline] {
00:00:02.889 [Pipeline] stage
00:00:02.891 [Pipeline] { (Prologue)
00:00:03.076 [Pipeline] sh
00:00:03.360 + logger -p user.info -t JENKINS-CI
00:00:03.376 [Pipeline] echo
00:00:03.377 Node: WFP12
00:00:03.383 [Pipeline] sh
00:00:03.674 [Pipeline] setCustomBuildProperty
00:00:03.686 [Pipeline] echo
00:00:03.688 Cleanup processes
00:00:03.693 [Pipeline] sh
00:00:03.972 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.973 2581149 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.985 [Pipeline] sh
00:00:04.262 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:04.263 ++ grep -v 'sudo pgrep'
00:00:04.263 ++ awk '{print $1}'
00:00:04.263 + sudo kill -9
00:00:04.263 + true
00:00:04.279 [Pipeline] cleanWs
00:00:04.288 [WS-CLEANUP] Deleting project workspace...
00:00:04.288 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.293 [WS-CLEANUP] done
00:00:04.297 [Pipeline] setCustomBuildProperty
00:00:04.311 [Pipeline] sh
00:00:04.589 + sudo git config --global --replace-all safe.directory '*'
00:00:04.670 [Pipeline] httpRequest
00:00:04.731 [Pipeline] echo
00:00:04.732 Sorcerer 10.211.164.101 is alive
00:00:04.740 [Pipeline] httpRequest
00:00:04.744 HttpMethod: GET
00:00:04.745 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.745 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.747 Response Code: HTTP/1.1 200 OK
00:00:04.748 Success: Status code 200 is in the accepted range: 200,404
00:00:04.748 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.447 [Pipeline] sh
00:00:05.726 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.995 [Pipeline] httpRequest
00:00:06.054 [Pipeline] echo
00:00:06.055 Sorcerer 10.211.164.101 is alive
00:00:06.062 [Pipeline] httpRequest
00:00:06.067 HttpMethod: GET
00:00:06.067 URL: http://10.211.164.101/packages/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:06.068 Sending request to url: http://10.211.164.101/packages/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:06.070 Response Code: HTTP/1.1 200 OK
00:00:06.071 Success: Status code 200 is in the accepted range: 200,404
00:00:06.071 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:23.070 [Pipeline] sh
00:00:23.352 + tar --no-same-owner -xf spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:27.555 [Pipeline] sh
00:00:27.837 + git -C spdk log --oneline -n5
00:00:27.837 bdeef1ed3 nvmf: add helper function to get a transport poll group
00:00:27.837 2728651ee accel: adjust task per ch define name
00:00:27.837 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:00:27.837 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS
00:00:27.837 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts
00:00:27.851 [Pipeline] }
00:00:27.872 [Pipeline] // stage
00:00:27.884 [Pipeline] stage
00:00:27.887 [Pipeline] { (Prepare)
00:00:27.916 [Pipeline] writeFile
00:00:27.975 [Pipeline] sh
00:00:28.251 + logger -p user.info -t JENKINS-CI
00:00:28.263 [Pipeline] sh
00:00:28.544 + logger -p user.info -t JENKINS-CI
00:00:28.557 [Pipeline] sh
00:00:28.837 + cat autorun-spdk.conf
00:00:28.838 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.838 SPDK_TEST_BLOCKDEV=1
00:00:28.838 SPDK_TEST_ISAL=1
00:00:28.838 SPDK_TEST_CRYPTO=1
00:00:28.838 SPDK_TEST_REDUCE=1
00:00:28.838 SPDK_TEST_VBDEV_COMPRESS=1
00:00:28.838 SPDK_RUN_UBSAN=1
00:00:28.844 RUN_NIGHTLY=0
00:00:28.850 [Pipeline] readFile
00:00:28.878 [Pipeline] withEnv
00:00:28.881 [Pipeline] {
00:00:28.895 [Pipeline] sh
00:00:29.177 + set -ex
00:00:29.177 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:00:29.177 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:29.177 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.177 ++ SPDK_TEST_BLOCKDEV=1
00:00:29.177 ++ SPDK_TEST_ISAL=1
00:00:29.177 ++ SPDK_TEST_CRYPTO=1
00:00:29.177 ++ SPDK_TEST_REDUCE=1
00:00:29.177 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:29.177 ++ SPDK_RUN_UBSAN=1
00:00:29.177 ++ RUN_NIGHTLY=0
00:00:29.177 + case $SPDK_TEST_NVMF_NICS in
00:00:29.177 + DRIVERS=
00:00:29.177 + [[ -n '' ]]
00:00:29.177 + exit 0
00:00:29.188 [Pipeline] }
00:00:29.207 [Pipeline] // withEnv
00:00:29.213 [Pipeline] }
00:00:29.230 [Pipeline] // stage
00:00:29.239 [Pipeline] catchError
00:00:29.240 [Pipeline] {
00:00:29.255 [Pipeline] timeout
00:00:29.255 Timeout set to expire in 40 min
00:00:29.257 [Pipeline] {
00:00:29.273 [Pipeline] stage
00:00:29.275 [Pipeline] { (Tests)
00:00:29.292 [Pipeline] sh
00:00:29.605 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:29.605 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:29.605 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:29.605 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:29.605 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:29.605 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:29.605 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:29.605 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:29.605 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:29.605 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:29.605 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:29.605 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:29.605 + source /etc/os-release
00:00:29.605 ++ NAME='Fedora Linux'
00:00:29.605 ++ VERSION='38 (Cloud Edition)'
00:00:29.605 ++ ID=fedora
00:00:29.605 ++ VERSION_ID=38
00:00:29.605 ++ VERSION_CODENAME=
00:00:29.605 ++ PLATFORM_ID=platform:f38
00:00:29.605 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.605 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.605 ++ LOGO=fedora-logo-icon
00:00:29.605 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.605 ++ HOME_URL=https://fedoraproject.org/
00:00:29.605 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.605 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.605 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.605 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.605 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.605 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.605 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.605 ++ SUPPORT_END=2024-05-14
00:00:29.605 ++ VARIANT='Cloud Edition'
00:00:29.605 ++ VARIANT_ID=cloud
00:00:29.605 + uname -a
00:00:29.605 Linux spdk-wfp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.605 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:32.892 Hugepages
00:00:32.892 node hugesize free / total
00:00:32.892 node0 1048576kB 0 / 0
00:00:32.892 node0 2048kB 0 / 0
00:00:32.892 node1 1048576kB 0 / 0
00:00:32.892 node1 2048kB 0 / 0
00:00:32.892
00:00:32.892 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:32.892 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:32.892 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:32.892 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1
00:00:32.892 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme0 nvme0n1 nvme0n2
00:00:32.892 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:32.892 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:32.892 + rm -f /tmp/spdk-ld-path
00:00:32.892 + source autorun-spdk.conf
00:00:32.892 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.892 ++ SPDK_TEST_BLOCKDEV=1
00:00:32.892 ++ SPDK_TEST_ISAL=1
00:00:32.892 ++ SPDK_TEST_CRYPTO=1
00:00:32.892 ++ SPDK_TEST_REDUCE=1
00:00:32.892 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:32.892 ++ SPDK_RUN_UBSAN=1
00:00:32.892 ++ RUN_NIGHTLY=0
00:00:32.892 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:32.892 + [[ -n '' ]]
00:00:32.892 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:32.892 + for M in /var/spdk/build-*-manifest.txt
00:00:32.892 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:32.892 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:32.892 + for M in /var/spdk/build-*-manifest.txt
00:00:32.892 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:32.892 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:32.892 ++ uname
00:00:32.892 + [[ Linux == \L\i\n\u\x ]]
00:00:32.892 + sudo dmesg -T
00:00:32.892 + sudo dmesg --clear
00:00:32.892 + dmesg_pid=2582729
00:00:32.892 + [[ Fedora Linux == FreeBSD ]]
00:00:32.892 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.892 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.892 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:32.892 + [[ -x /usr/src/fio-static/fio ]]
00:00:32.892 + export FIO_BIN=/usr/src/fio-static/fio
00:00:32.892 + FIO_BIN=/usr/src/fio-static/fio
00:00:32.892 + sudo dmesg -Tw
00:00:32.892 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:32.892 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:32.892 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:32.892 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.892 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.892 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:32.892 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.892 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.892 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:32.892 Test configuration:
00:00:32.893 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.893 SPDK_TEST_BLOCKDEV=1
00:00:32.893 SPDK_TEST_ISAL=1
00:00:32.893 SPDK_TEST_CRYPTO=1
00:00:32.893 SPDK_TEST_REDUCE=1
00:00:32.893 SPDK_TEST_VBDEV_COMPRESS=1
00:00:32.893 SPDK_RUN_UBSAN=1
00:00:33.151 RUN_NIGHTLY=0
18:15:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:00:33.151 18:15:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:33.151 18:15:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:33.151 18:15:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:33.151 18:15:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.151 18:15:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.151 18:15:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.151 18:15:18 -- paths/export.sh@5 -- $ export PATH
00:00:33.151 18:15:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.151 18:15:18 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:00:33.151 18:15:18 -- common/autobuild_common.sh@444 -- $ date +%s
00:00:33.151 18:15:18 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721060118.XXXXXX
00:00:33.151 18:15:18 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721060118.XfDRQE
00:00:33.151 18:15:18 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:33.151 18:15:18 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:00:33.151 18:15:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:00:33.151 18:15:18 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:33.151 18:15:18 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:33.151 18:15:18 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:33.151 18:15:18 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:33.151 18:15:18 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.151 18:15:18 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:00:33.151 18:15:18 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:33.151 18:15:18 -- pm/common@17 -- $ local monitor
00:00:33.151 18:15:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.151 18:15:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.151 18:15:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.151 18:15:18 -- pm/common@21 -- $ date +%s
00:00:33.151 18:15:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.151 18:15:18 -- pm/common@21 -- $ date +%s
00:00:33.151 18:15:18 -- pm/common@25 -- $ sleep 1
00:00:33.151 18:15:18 -- pm/common@21 -- $ date +%s
00:00:33.151 18:15:18 -- pm/common@21 -- $ date +%s
00:00:33.151 18:15:18 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060118
00:00:33.151 18:15:18 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060118
00:00:33.151 18:15:18 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060118
00:00:33.152 18:15:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060118
00:00:33.152 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060118_collect-vmstat.pm.log
00:00:33.152 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060118_collect-cpu-load.pm.log
00:00:33.152 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060118_collect-cpu-temp.pm.log
00:00:33.152 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060118_collect-bmc-pm.bmc.pm.log
00:00:34.087 18:15:19 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:34.087 18:15:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:34.087 18:15:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:34.087 18:15:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:34.087 18:15:19 -- spdk/autobuild.sh@16 -- $ date -u
00:00:34.087 Mon Jul 15 04:15:19 PM UTC 2024
00:00:34.087 18:15:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:34.087 v24.09-pre-207-gbdeef1ed3
00:00:34.087 18:15:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:34.087 18:15:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:34.087 18:15:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:34.087 18:15:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:34.087 18:15:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:34.087 18:15:19 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.087 ************************************
00:00:34.087 START TEST ubsan
00:00:34.087 ************************************
00:00:34.087 18:15:19 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:34.087 using ubsan
00:00:34.087
00:00:34.087 real 0m0.000s
00:00:34.087 user 0m0.000s
00:00:34.087 sys 0m0.000s
00:00:34.087 18:15:19 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:34.087 18:15:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:34.087 ************************************
00:00:34.087 END TEST ubsan
00:00:34.087 ************************************
00:00:34.087 18:15:19 -- common/autotest_common.sh@1142 -- $ return 0
00:00:34.087 18:15:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:34.087 18:15:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:34.087 18:15:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:34.087 18:15:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:34.087 18:15:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:34.087 18:15:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:34.087 18:15:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:34.087 18:15:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:34.087 18:15:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:34.346 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:00:34.346 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:00:34.604 Using 'verbs' RDMA provider
00:00:50.879 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:03.085 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:03.085 Creating mk/config.mk...done.
00:01:03.085 Creating mk/cc.flags.mk...done.
00:01:03.085 Type 'make' to build.
00:01:03.085 18:15:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j88
00:01:03.085 18:15:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:03.085 18:15:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:03.085 18:15:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:03.085 ************************************
00:01:03.085 START TEST make
00:01:03.085 ************************************
00:01:03.085 18:15:48 make -- common/autotest_common.sh@1123 -- $ make -j88
00:01:03.085 make[1]: Nothing to be done for 'all'.
00:01:41.890 The Meson build system
00:01:41.890 Version: 1.3.1
00:01:41.890 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:01:41.890 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:01:41.890 Build type: native build
00:01:41.890 Program cat found: YES (/usr/bin/cat)
00:01:41.890 Project name: DPDK
00:01:41.890 Project version: 24.03.0
00:01:41.890 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:41.890 C linker for the host machine: cc ld.bfd 2.39-16
00:01:41.890 Host machine cpu family: x86_64
00:01:41.890 Host machine cpu: x86_64
00:01:41.890 Message: ## Building in Developer Mode ##
00:01:41.890 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:41.890 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:41.890 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:41.890 Program python3 found: YES (/usr/bin/python3)
00:01:41.890 Program cat found: YES (/usr/bin/cat)
00:01:41.890 Compiler for C supports arguments -march=native: YES
00:01:41.890 Checking for size of "void *" : 8
00:01:41.890 Checking for size of "void *" : 8 (cached)
00:01:41.890 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:41.890 Library m found: YES
00:01:41.890 Library numa found: YES
00:01:41.890 Has header "numaif.h" : YES
00:01:41.890 Library fdt found: NO
00:01:41.890 Library execinfo found: NO
00:01:41.890 Has header "execinfo.h" : YES
00:01:41.890 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:41.890 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:41.890 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:41.890 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:41.890 Run-time dependency openssl found: YES 3.0.9
00:01:41.890 Run-time dependency libpcap found: YES 1.10.4
00:01:41.890 Has header "pcap.h" with dependency libpcap: YES
00:01:41.890 Compiler for C supports arguments -Wcast-qual: YES
00:01:41.890 Compiler for C supports arguments -Wdeprecated: YES
00:01:41.890 Compiler for C supports arguments -Wformat: YES
00:01:41.890 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:41.890 Compiler for C supports arguments -Wformat-security: NO
00:01:41.890 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:41.890 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:41.890 Compiler for C supports arguments -Wnested-externs: YES
00:01:41.890 Compiler for C supports arguments -Wold-style-definition: YES
00:01:41.890 Compiler for C supports arguments -Wpointer-arith: YES
00:01:41.890 Compiler for C supports arguments -Wsign-compare: YES
00:01:41.890 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:41.890 Compiler for C supports arguments -Wundef: YES
00:01:41.890 Compiler for C supports arguments -Wwrite-strings: YES
00:01:41.890 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:41.890 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:41.890 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:41.890 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:41.890 Program objdump found: YES (/usr/bin/objdump)
00:01:41.890 Compiler for C supports arguments -mavx512f: YES
00:01:41.890 Checking if "AVX512 checking" compiles: YES
00:01:41.890 Fetching value of define "__SSE4_2__" : 1
00:01:41.890 Fetching value of define "__AES__" : 1
00:01:41.890 Fetching value of define "__AVX__" : 1
00:01:41.890 Fetching value of define "__AVX2__" : 1
00:01:41.890 Fetching value of define "__AVX512BW__" : 1
00:01:41.890 Fetching value of define "__AVX512CD__" : 1
00:01:41.890 Fetching value of define "__AVX512DQ__" : 1
00:01:41.890 Fetching value of define "__AVX512F__" : 1
00:01:41.890 Fetching value of define "__AVX512VL__" : 1
00:01:41.890 Fetching value of define "__PCLMUL__" : 1
00:01:41.890 Fetching value of define "__RDRND__" : 1
00:01:41.890 Fetching value of define "__RDSEED__" : 1
00:01:41.890 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:41.890 Fetching value of define "__znver1__" : (undefined)
00:01:41.890 Fetching value of define "__znver2__" : (undefined)
00:01:41.890 Fetching value of define "__znver3__" : (undefined)
00:01:41.890 Fetching value of define "__znver4__" : (undefined)
00:01:41.890 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:41.890 Message: lib/log: Defining dependency "log"
00:01:41.890 Message: lib/kvargs: Defining dependency "kvargs"
00:01:41.890 Message: lib/telemetry: Defining dependency "telemetry"
00:01:41.890 Checking for function "getentropy" : NO
00:01:41.890 Message: lib/eal: Defining dependency "eal"
00:01:41.890 Message: lib/ring: Defining dependency "ring"
00:01:41.890 Message: lib/rcu: Defining dependency "rcu"
00:01:41.890 Message: lib/mempool: Defining dependency "mempool"
00:01:41.890 Message: lib/mbuf: Defining dependency "mbuf"
00:01:41.890 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:41.890 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:41.890 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:41.890 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:41.890 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:41.890 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:41.890 Compiler for C supports arguments -mpclmul: YES
00:01:41.890 Compiler for C supports arguments -maes: YES
00:01:41.890 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:41.890 Compiler for C supports arguments -mavx512bw: YES
00:01:41.890 Compiler for C supports arguments -mavx512dq: YES
00:01:41.890 Compiler for C supports arguments -mavx512vl: YES
00:01:41.890 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:41.890 Compiler for C supports arguments -mavx2: YES
00:01:41.890 Compiler for C supports arguments -mavx: YES
00:01:41.890 Message: lib/net: Defining dependency "net"
00:01:41.890 Message: lib/meter: Defining dependency "meter"
00:01:41.890 Message: lib/ethdev: Defining dependency "ethdev"
00:01:41.890 Message: lib/pci: Defining dependency "pci"
00:01:41.890 Message: lib/cmdline: Defining dependency "cmdline"
00:01:41.890 Message: lib/hash: Defining dependency "hash"
00:01:41.890 Message: lib/timer: Defining dependency "timer"
00:01:41.890 Message: lib/compressdev: Defining dependency "compressdev"
00:01:41.890 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:41.890 Message: lib/dmadev: Defining dependency "dmadev"
00:01:41.890 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:41.890 Message: lib/power: Defining dependency "power"
00:01:41.890 Message: lib/reorder: Defining dependency "reorder"
00:01:41.890 Message: lib/security: Defining dependency "security"
00:01:41.890 Has header "linux/userfaultfd.h" : YES
00:01:41.890 Has header "linux/vduse.h" : YES
00:01:41.890 Message: lib/vhost: Defining dependency "vhost"
00:01:41.890 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:41.890 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:41.890 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:41.890 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:41.890 Compiler for C supports arguments -std=c11: YES
00:01:41.890 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:01:41.890 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:01:41.890 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:01:41.890 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:01:41.890 Run-time dependency libmlx5 found: YES 1.24.44.0
00:01:41.890 Run-time dependency libibverbs found: YES 1.14.44.0
00:01:41.890 Library mtcr_ul found: NO
00:01:41.890 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:01:41.890 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:01:41.890 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:01:41.890 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:01:41.890 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:01:41.890 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:01:41.891 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:01:41.891 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:01:41.891 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:46.080 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:46.080 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:46.080 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, 
libibverbs: YES 00:01:46.080 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" 
with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:46.081 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:46.081 Configuring mlx5_autoconf.h using configuration 00:01:46.081 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:46.081 Run-time dependency libcrypto found: YES 3.0.9 00:01:46.081 Library IPSec_MB found: YES 00:01:46.081 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:46.081 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:46.081 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.081 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:46.081 Library IPSec_MB found: YES 00:01:46.081 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:46.081 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:46.081 Compiler for C supports 
arguments -std=c11: YES (cached) 00:01:46.081 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:46.081 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:46.081 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:46.081 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:46.081 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:46.081 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:46.081 Library libisal found: NO 00:01:46.081 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:46.081 Compiler for C supports arguments -std=c11: YES (cached) 00:01:46.081 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:46.081 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:46.081 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:46.081 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:46.081 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:46.081 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.081 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.081 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.081 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.081 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.081 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.081 Configuring doxy-api-html.conf using configuration 00:01:46.081 Configuring doxy-api-man.conf using configuration 00:01:46.081 Program mandb found: YES (/usr/bin/mandb) 00:01:46.081 Program sphinx-build found: NO 00:01:46.081 Configuring rte_build_config.h using configuration 00:01:46.081 Message: 00:01:46.081 ================= 00:01:46.081 Applications Enabled 00:01:46.081 ================= 00:01:46.081 
00:01:46.081 apps: 00:01:46.081 00:01:46.081 00:01:46.081 Message: 00:01:46.081 ================= 00:01:46.081 Libraries Enabled 00:01:46.081 ================= 00:01:46.081 00:01:46.081 libs: 00:01:46.081 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.081 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.081 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.081 00:01:46.081 Message: 00:01:46.081 =============== 00:01:46.081 Drivers Enabled 00:01:46.081 =============== 00:01:46.081 00:01:46.081 common: 00:01:46.081 mlx5, qat, 00:01:46.081 bus: 00:01:46.081 auxiliary, pci, vdev, 00:01:46.081 mempool: 00:01:46.081 ring, 00:01:46.081 dma: 00:01:46.081 00:01:46.081 net: 00:01:46.081 00:01:46.081 crypto: 00:01:46.081 ipsec_mb, mlx5, 00:01:46.081 compress: 00:01:46.081 isal, mlx5, 00:01:46.081 vdpa: 00:01:46.081 00:01:46.081 00:01:46.081 Message: 00:01:46.081 ================= 00:01:46.081 Content Skipped 00:01:46.081 ================= 00:01:46.081 00:01:46.081 apps: 00:01:46.081 dumpcap: explicitly disabled via build config 00:01:46.081 graph: explicitly disabled via build config 00:01:46.081 pdump: explicitly disabled via build config 00:01:46.081 proc-info: explicitly disabled via build config 00:01:46.081 test-acl: explicitly disabled via build config 00:01:46.081 test-bbdev: explicitly disabled via build config 00:01:46.081 test-cmdline: explicitly disabled via build config 00:01:46.081 test-compress-perf: explicitly disabled via build config 00:01:46.081 test-crypto-perf: explicitly disabled via build config 00:01:46.081 test-dma-perf: explicitly disabled via build config 00:01:46.081 test-eventdev: explicitly disabled via build config 00:01:46.081 test-fib: explicitly disabled via build config 00:01:46.081 test-flow-perf: explicitly disabled via build config 00:01:46.081 test-gpudev: explicitly disabled via build config 00:01:46.081 test-mldev: explicitly disabled via build config 00:01:46.081 test-pipeline: explicitly 
disabled via build config 00:01:46.081 test-pmd: explicitly disabled via build config 00:01:46.081 test-regex: explicitly disabled via build config 00:01:46.081 test-sad: explicitly disabled via build config 00:01:46.081 test-security-perf: explicitly disabled via build config 00:01:46.081 00:01:46.081 libs: 00:01:46.081 argparse: explicitly disabled via build config 00:01:46.081 metrics: explicitly disabled via build config 00:01:46.081 acl: explicitly disabled via build config 00:01:46.081 bbdev: explicitly disabled via build config 00:01:46.081 bitratestats: explicitly disabled via build config 00:01:46.081 bpf: explicitly disabled via build config 00:01:46.081 cfgfile: explicitly disabled via build config 00:01:46.081 distributor: explicitly disabled via build config 00:01:46.081 efd: explicitly disabled via build config 00:01:46.081 eventdev: explicitly disabled via build config 00:01:46.081 dispatcher: explicitly disabled via build config 00:01:46.081 gpudev: explicitly disabled via build config 00:01:46.081 gro: explicitly disabled via build config 00:01:46.081 gso: explicitly disabled via build config 00:01:46.081 ip_frag: explicitly disabled via build config 00:01:46.081 jobstats: explicitly disabled via build config 00:01:46.081 latencystats: explicitly disabled via build config 00:01:46.081 lpm: explicitly disabled via build config 00:01:46.081 member: explicitly disabled via build config 00:01:46.081 pcapng: explicitly disabled via build config 00:01:46.081 rawdev: explicitly disabled via build config 00:01:46.081 regexdev: explicitly disabled via build config 00:01:46.081 mldev: explicitly disabled via build config 00:01:46.081 rib: explicitly disabled via build config 00:01:46.081 sched: explicitly disabled via build config 00:01:46.081 stack: explicitly disabled via build config 00:01:46.081 ipsec: explicitly disabled via build config 00:01:46.081 pdcp: explicitly disabled via build config 00:01:46.081 fib: explicitly disabled via build config 
00:01:46.081 port: explicitly disabled via build config 00:01:46.081 pdump: explicitly disabled via build config 00:01:46.081 table: explicitly disabled via build config 00:01:46.081 pipeline: explicitly disabled via build config 00:01:46.081 graph: explicitly disabled via build config 00:01:46.081 node: explicitly disabled via build config 00:01:46.081 00:01:46.081 drivers: 00:01:46.081 common/cpt: not in enabled drivers build config 00:01:46.081 common/dpaax: not in enabled drivers build config 00:01:46.081 common/iavf: not in enabled drivers build config 00:01:46.081 common/idpf: not in enabled drivers build config 00:01:46.081 common/ionic: not in enabled drivers build config 00:01:46.081 common/mvep: not in enabled drivers build config 00:01:46.081 common/octeontx: not in enabled drivers build config 00:01:46.081 bus/cdx: not in enabled drivers build config 00:01:46.081 bus/dpaa: not in enabled drivers build config 00:01:46.081 bus/fslmc: not in enabled drivers build config 00:01:46.081 bus/ifpga: not in enabled drivers build config 00:01:46.081 bus/platform: not in enabled drivers build config 00:01:46.081 bus/uacce: not in enabled drivers build config 00:01:46.081 bus/vmbus: not in enabled drivers build config 00:01:46.081 common/cnxk: not in enabled drivers build config 00:01:46.081 common/nfp: not in enabled drivers build config 00:01:46.081 common/nitrox: not in enabled drivers build config 00:01:46.081 common/sfc_efx: not in enabled drivers build config 00:01:46.081 mempool/bucket: not in enabled drivers build config 00:01:46.081 mempool/cnxk: not in enabled drivers build config 00:01:46.081 mempool/dpaa: not in enabled drivers build config 00:01:46.081 mempool/dpaa2: not in enabled drivers build config 00:01:46.081 mempool/octeontx: not in enabled drivers build config 00:01:46.081 mempool/stack: not in enabled drivers build config 00:01:46.081 dma/cnxk: not in enabled drivers build config 00:01:46.081 dma/dpaa: not in enabled drivers build config 
00:01:46.081 dma/dpaa2: not in enabled drivers build config 00:01:46.081 dma/hisilicon: not in enabled drivers build config 00:01:46.081 dma/idxd: not in enabled drivers build config 00:01:46.082 dma/ioat: not in enabled drivers build config 00:01:46.082 dma/skeleton: not in enabled drivers build config 00:01:46.082 net/af_packet: not in enabled drivers build config 00:01:46.082 net/af_xdp: not in enabled drivers build config 00:01:46.082 net/ark: not in enabled drivers build config 00:01:46.082 net/atlantic: not in enabled drivers build config 00:01:46.082 net/avp: not in enabled drivers build config 00:01:46.082 net/axgbe: not in enabled drivers build config 00:01:46.082 net/bnx2x: not in enabled drivers build config 00:01:46.082 net/bnxt: not in enabled drivers build config 00:01:46.082 net/bonding: not in enabled drivers build config 00:01:46.082 net/cnxk: not in enabled drivers build config 00:01:46.082 net/cpfl: not in enabled drivers build config 00:01:46.082 net/cxgbe: not in enabled drivers build config 00:01:46.082 net/dpaa: not in enabled drivers build config 00:01:46.082 net/dpaa2: not in enabled drivers build config 00:01:46.082 net/e1000: not in enabled drivers build config 00:01:46.082 net/ena: not in enabled drivers build config 00:01:46.082 net/enetc: not in enabled drivers build config 00:01:46.082 net/enetfec: not in enabled drivers build config 00:01:46.082 net/enic: not in enabled drivers build config 00:01:46.082 net/failsafe: not in enabled drivers build config 00:01:46.082 net/fm10k: not in enabled drivers build config 00:01:46.082 net/gve: not in enabled drivers build config 00:01:46.082 net/hinic: not in enabled drivers build config 00:01:46.082 net/hns3: not in enabled drivers build config 00:01:46.082 net/i40e: not in enabled drivers build config 00:01:46.082 net/iavf: not in enabled drivers build config 00:01:46.082 net/ice: not in enabled drivers build config 00:01:46.082 net/idpf: not in enabled drivers build config 00:01:46.082 
net/igc: not in enabled drivers build config 00:01:46.082 net/ionic: not in enabled drivers build config 00:01:46.082 net/ipn3ke: not in enabled drivers build config 00:01:46.082 net/ixgbe: not in enabled drivers build config 00:01:46.082 net/mana: not in enabled drivers build config 00:01:46.082 net/memif: not in enabled drivers build config 00:01:46.082 net/mlx4: not in enabled drivers build config 00:01:46.082 net/mlx5: not in enabled drivers build config 00:01:46.082 net/mvneta: not in enabled drivers build config 00:01:46.082 net/mvpp2: not in enabled drivers build config 00:01:46.082 net/netvsc: not in enabled drivers build config 00:01:46.082 net/nfb: not in enabled drivers build config 00:01:46.082 net/nfp: not in enabled drivers build config 00:01:46.082 net/ngbe: not in enabled drivers build config 00:01:46.082 net/null: not in enabled drivers build config 00:01:46.082 net/octeontx: not in enabled drivers build config 00:01:46.082 net/octeon_ep: not in enabled drivers build config 00:01:46.082 net/pcap: not in enabled drivers build config 00:01:46.082 net/pfe: not in enabled drivers build config 00:01:46.082 net/qede: not in enabled drivers build config 00:01:46.082 net/ring: not in enabled drivers build config 00:01:46.082 net/sfc: not in enabled drivers build config 00:01:46.082 net/softnic: not in enabled drivers build config 00:01:46.082 net/tap: not in enabled drivers build config 00:01:46.082 net/thunderx: not in enabled drivers build config 00:01:46.082 net/txgbe: not in enabled drivers build config 00:01:46.082 net/vdev_netvsc: not in enabled drivers build config 00:01:46.082 net/vhost: not in enabled drivers build config 00:01:46.082 net/virtio: not in enabled drivers build config 00:01:46.082 net/vmxnet3: not in enabled drivers build config 00:01:46.082 raw/*: missing internal dependency, "rawdev" 00:01:46.082 crypto/armv8: not in enabled drivers build config 00:01:46.082 crypto/bcmfs: not in enabled drivers build config 00:01:46.082 
crypto/caam_jr: not in enabled drivers build config 00:01:46.082 crypto/ccp: not in enabled drivers build config 00:01:46.082 crypto/cnxk: not in enabled drivers build config 00:01:46.082 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.082 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.082 crypto/mvsam: not in enabled drivers build config 00:01:46.082 crypto/nitrox: not in enabled drivers build config 00:01:46.082 crypto/null: not in enabled drivers build config 00:01:46.082 crypto/octeontx: not in enabled drivers build config 00:01:46.082 crypto/openssl: not in enabled drivers build config 00:01:46.082 crypto/scheduler: not in enabled drivers build config 00:01:46.082 crypto/uadk: not in enabled drivers build config 00:01:46.082 crypto/virtio: not in enabled drivers build config 00:01:46.082 compress/nitrox: not in enabled drivers build config 00:01:46.082 compress/octeontx: not in enabled drivers build config 00:01:46.082 compress/zlib: not in enabled drivers build config 00:01:46.082 regex/*: missing internal dependency, "regexdev" 00:01:46.082 ml/*: missing internal dependency, "mldev" 00:01:46.082 vdpa/ifc: not in enabled drivers build config 00:01:46.082 vdpa/mlx5: not in enabled drivers build config 00:01:46.082 vdpa/nfp: not in enabled drivers build config 00:01:46.082 vdpa/sfc: not in enabled drivers build config 00:01:46.082 event/*: missing internal dependency, "eventdev" 00:01:46.082 baseband/*: missing internal dependency, "bbdev" 00:01:46.082 gpu/*: missing internal dependency, "gpudev" 00:01:46.082 00:01:46.082 00:01:46.647 Build targets in project: 115 00:01:46.647 00:01:46.647 DPDK 24.03.0 00:01:46.647 00:01:46.647 User defined options 00:01:46.647 buildtype : debug 00:01:46.647 default_library : shared 00:01:46.647 libdir : lib 00:01:46.647 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:46.647 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 
-I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:46.647 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:46.647 cpu_instruction_set: native 00:01:46.647 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:46.647 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:46.647 enable_docs : false 00:01:46.647 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:46.647 enable_kmods : false 00:01:46.647 max_lcores : 128 00:01:46.647 tests : false 00:01:46.647 00:01:46.647 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:47.238 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:47.238 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:47.238 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.238 [3/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:47.504 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:47.504 [5/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.504 [6/378] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.504 [7/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.504 [8/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.504 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.504 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.504 [11/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.504 [12/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:47.504 [13/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.504 [14/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.504 [15/378] Linking static target lib/librte_kvargs.a 00:01:47.504 [16/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:47.504 [17/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:47.504 [18/378] Linking static target lib/librte_log.a 00:01:47.764 [19/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.764 [20/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.030 [21/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.030 [22/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.030 [23/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.030 [24/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.030 [25/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.030 [26/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.030 [27/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.030 [28/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:48.030 [29/378] Compiling 
C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.030 [30/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:48.030 [31/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.030 [32/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:48.030 [33/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.030 [34/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.030 [35/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.030 [36/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.030 [37/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.030 [38/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.030 [39/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.030 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.030 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.030 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.030 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.030 [44/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.030 [45/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.030 [46/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.030 [47/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.030 [48/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.030 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.030 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.030 [51/378] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.030 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.030 [53/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.030 [54/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.030 [55/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.030 [56/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.030 [57/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.030 [58/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.030 [59/378] Linking static target lib/librte_pci.a 00:01:48.030 [60/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.030 [61/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.030 [62/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.030 [63/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.030 [64/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.030 [65/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.030 [66/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.030 [67/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.030 [68/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.030 [69/378] Linking static target lib/librte_ring.a 00:01:48.030 [70/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.030 [71/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.030 [72/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.030 [73/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.030 [74/378] Linking static target lib/librte_telemetry.a 
00:01:48.030 [75/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.030 [76/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.030 [77/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.030 [78/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.030 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.030 [80/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.030 [81/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.030 [82/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.291 [83/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.291 [84/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.291 [85/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.291 [86/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.291 [87/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.291 [88/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.291 [89/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:48.291 [90/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.291 [91/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.291 [92/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.291 [93/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.291 [94/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.291 [95/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.291 [96/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.291 [97/378] 
Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.291 [98/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.291 [99/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.291 [100/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.291 [101/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.291 [102/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.291 [103/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.291 [104/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.291 [105/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:48.291 [106/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.291 [107/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.291 [108/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.291 [109/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.291 [110/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:48.291 [111/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.291 [112/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.291 [113/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.291 [114/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.291 [115/378] Linking static target lib/librte_rcu.a 00:01:48.291 [116/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.554 [117/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.554 [118/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:48.554 [119/378] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:48.554 [120/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.554 [121/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:48.554 [122/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.554 [123/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:48.554 [124/378] Linking static target lib/librte_meter.a 00:01:48.554 [125/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.554 [126/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.554 [127/378] Linking static target lib/librte_mbuf.a 00:01:48.818 [128/378] Linking target lib/librte_log.so.24.1 00:01:48.818 [129/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.818 [130/378] Linking static target lib/librte_net.a 00:01:48.818 [131/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:48.818 [132/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.818 [133/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.818 [134/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:48.818 [135/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:48.818 [136/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:48.818 [137/378] Linking static target lib/librte_cmdline.a 00:01:48.818 [138/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.818 [139/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:48.818 [140/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.818 [141/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:48.818 [142/378] Compiling C 
object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.818 [143/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.818 [144/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:48.818 [145/378] Linking static target lib/librte_timer.a 00:01:48.818 [146/378] Linking static target lib/librte_eal.a 00:01:48.818 [147/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.818 [148/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.818 [149/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.818 [150/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.818 [151/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:48.818 [152/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.818 [153/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:48.818 [154/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.818 [155/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.818 [156/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.818 [157/378] Linking static target lib/librte_mempool.a 00:01:48.818 [158/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:48.818 [159/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:49.078 [160/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:49.078 [161/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:49.078 [162/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.078 [163/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:49.078 [164/378] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.078 [165/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:49.078 [166/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:49.078 [167/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.078 [168/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.078 [169/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.078 [170/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:49.078 [171/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:49.078 [172/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:49.078 [173/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.078 [174/378] Linking static target lib/librte_dmadev.a 00:01:49.078 [175/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:49.078 [176/378] Linking target lib/librte_kvargs.so.24.1 00:01:49.078 [177/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.078 [178/378] Linking static target lib/librte_power.a 00:01:49.078 [179/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:49.078 [180/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:49.078 [181/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.078 [182/378] Linking static target lib/librte_compressdev.a 00:01:49.078 [183/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.078 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:49.078 [185/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:49.078 [186/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:49.078 [187/378] Linking static target lib/librte_reorder.a 00:01:49.078 [188/378] Linking target lib/librte_telemetry.so.24.1 00:01:49.078 [189/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:49.078 [190/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:49.078 [191/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:49.078 [192/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:49.078 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:49.078 [194/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.337 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:49.337 [196/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:49.337 [197/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:49.337 [198/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:49.337 [199/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:49.337 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:49.337 [201/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.337 [202/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.337 [203/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:49.337 [204/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:49.337 [205/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:49.337 [206/378] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:01:49.337 [207/378] Linking static target lib/librte_security.a 00:01:49.337 [208/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:49.337 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:49.337 [210/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:49.337 [211/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:49.337 [212/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:49.337 [213/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:49.337 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:49.337 [215/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:49.337 [216/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:49.337 [217/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.337 [218/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:49.337 [219/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:49.337 [220/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.337 [221/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:49.337 [222/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:49.337 [223/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.337 [224/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.337 
[225/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:49.337 [226/378] Linking static target drivers/librte_bus_vdev.a 00:01:49.337 [227/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:49.337 [228/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:49.337 [229/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:49.337 [230/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:49.337 [231/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:49.337 [232/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.594 [233/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.594 [234/378] Linking static target lib/librte_hash.a 00:01:49.594 [235/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:49.594 [236/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.594 [237/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:49.594 [238/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.594 [239/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:49.594 [240/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:49.594 [241/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.594 [242/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:49.594 [243/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.594 [244/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:49.594 [245/378] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.594 [246/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.594 [247/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.594 [248/378] Linking static target lib/librte_ethdev.a 00:01:49.594 [249/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.594 [250/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.594 [251/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:49.594 [252/378] Linking static target drivers/librte_bus_pci.a 00:01:49.594 [253/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:49.594 [254/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.594 [255/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.594 [256/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.594 [257/378] Linking static target lib/librte_cryptodev.a 00:01:49.595 [258/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:49.595 [259/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.595 [260/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.595 [261/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:49.595 [262/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.595 [263/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:49.595 [264/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:49.852 [265/378] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.852 [266/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:49.852 [267/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.852 [268/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:49.852 [269/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:49.852 [270/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:49.852 [271/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.852 [272/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:49.852 [273/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:49.852 [274/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:49.852 [275/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.852 [276/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.852 [277/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.852 [278/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.852 [279/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.852 [280/378] Linking static target drivers/librte_mempool_ring.a 00:01:49.853 [281/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:49.853 [282/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:50.111 [283/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:50.111 [284/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:50.111 [285/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:50.111 [286/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:50.111 [287/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:50.111 [288/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:50.111 [289/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:50.111 [290/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:50.111 [291/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:50.111 [292/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:50.111 [293/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:50.111 [294/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:50.111 [295/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:50.111 [296/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:50.111 [297/378] Linking static target drivers/librte_compress_mlx5.a 00:01:50.111 [298/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:50.111 [299/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:50.111 [300/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:50.111 [301/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:50.111 [302/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:50.111 [303/378] Compiling C object 
drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:50.369 [304/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:50.369 [305/378] Linking static target drivers/librte_compress_isal.a 00:01:50.369 [306/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:50.369 [307/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.369 [308/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.369 [309/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:50.369 [310/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:50.369 [311/378] Linking static target drivers/librte_common_mlx5.a 00:01:50.369 [312/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.627 [313/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:50.628 [314/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:50.628 [315/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:50.892 [316/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:50.892 [317/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:50.892 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:50.892 [319/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:51.151 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:51.409 [321/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:51.409 [322/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:51.409 
[323/378] Linking static target drivers/librte_common_qat.a 00:01:51.409 [324/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.976 [325/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:51.976 [326/378] Linking static target lib/librte_vhost.a 00:01:53.881 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.259 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.809 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.237 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.237 [331/378] Linking target lib/librte_eal.so.24.1 00:01:59.237 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.237 [333/378] Linking target lib/librte_pci.so.24.1 00:01:59.237 [334/378] Linking target lib/librte_dmadev.so.24.1 00:01:59.237 [335/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:01:59.237 [336/378] Linking target lib/librte_ring.so.24.1 00:01:59.237 [337/378] Linking target lib/librte_meter.so.24.1 00:01:59.237 [338/378] Linking target lib/librte_timer.so.24.1 00:01:59.237 [339/378] Linking target drivers/librte_bus_vdev.so.24.1 00:01:59.494 [340/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.494 [341/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:01:59.494 [342/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.494 [343/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.494 [344/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.494 [345/378] Generating symbol file 
drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:01:59.494 [346/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.494 [347/378] Linking target lib/librte_rcu.so.24.1 00:01:59.494 [348/378] Linking target lib/librte_mempool.so.24.1 00:01:59.494 [349/378] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.752 [350/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:01:59.752 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.752 [352/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.752 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:01:59.752 [354/378] Linking target lib/librte_mbuf.so.24.1 00:02:00.010 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:00.010 [356/378] Linking target lib/librte_reorder.so.24.1 00:02:00.010 [357/378] Linking target lib/librte_compressdev.so.24.1 00:02:00.010 [358/378] Linking target lib/librte_net.so.24.1 00:02:00.010 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:02:00.010 [360/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:00.010 [361/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:00.010 [362/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:00.268 [363/378] Linking target lib/librte_security.so.24.1 00:02:00.268 [364/378] Linking target lib/librte_cmdline.so.24.1 00:02:00.268 [365/378] Linking target drivers/librte_compress_isal.so.24.1 00:02:00.268 [366/378] Linking target lib/librte_ethdev.so.24.1 00:02:00.268 [367/378] Linking target lib/librte_hash.so.24.1 00:02:00.268 [368/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:00.268 [369/378] Generating symbol file 
lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.268 [370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:00.527 [371/378] Linking target lib/librte_power.so.24.1 00:02:00.527 [372/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:00.527 [373/378] Linking target lib/librte_vhost.so.24.1 00:02:00.527 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:00.527 [375/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:00.786 [376/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:00.786 [377/378] Linking target drivers/librte_common_qat.so.24.1 00:02:00.786 [378/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:00.786 INFO: autodetecting backend as ninja 00:02:00.786 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 88 00:02:02.688 CC lib/log/log.o 00:02:02.688 CC lib/log/log_flags.o 00:02:02.688 CC lib/log/log_deprecated.o 00:02:02.688 CC lib/ut_mock/mock.o 00:02:02.688 CC lib/ut/ut.o 00:02:02.688 LIB libspdk_log.a 00:02:02.688 LIB libspdk_ut.a 00:02:02.689 LIB libspdk_ut_mock.a 00:02:02.689 SO libspdk_log.so.7.0 00:02:02.689 SO libspdk_ut.so.2.0 00:02:02.689 SO libspdk_ut_mock.so.6.0 00:02:02.689 SYMLINK libspdk_log.so 00:02:02.689 SYMLINK libspdk_ut.so 00:02:02.689 SYMLINK libspdk_ut_mock.so 00:02:02.946 CC lib/util/bit_array.o 00:02:02.946 CC lib/util/base64.o 00:02:02.946 CC lib/util/cpuset.o 00:02:02.946 CC lib/util/crc16.o 00:02:02.946 CC lib/util/crc32.o 00:02:02.946 CC lib/util/crc32c.o 00:02:02.946 CC lib/util/crc32_ieee.o 00:02:02.946 CC lib/util/crc64.o 00:02:02.946 CC lib/util/dif.o 00:02:03.203 CC lib/util/fd.o 00:02:03.203 CC lib/dma/dma.o 00:02:03.203 CC lib/util/hexlify.o 00:02:03.203 CC lib/util/file.o 00:02:03.203 CC lib/util/iov.o 00:02:03.203 CC lib/util/math.o 00:02:03.203 CC lib/util/pipe.o 00:02:03.203 CC 
lib/util/strerror_tls.o 00:02:03.203 CC lib/ioat/ioat.o 00:02:03.203 CC lib/util/string.o 00:02:03.203 CXX lib/trace_parser/trace.o 00:02:03.203 CC lib/util/uuid.o 00:02:03.203 CC lib/util/fd_group.o 00:02:03.203 CC lib/util/xor.o 00:02:03.203 CC lib/util/zipf.o 00:02:03.203 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.203 CC lib/vfio_user/host/vfio_user.o 00:02:03.203 LIB libspdk_dma.a 00:02:03.203 SO libspdk_dma.so.4.0 00:02:03.460 SYMLINK libspdk_dma.so 00:02:03.460 LIB libspdk_ioat.a 00:02:03.460 SO libspdk_ioat.so.7.0 00:02:03.460 LIB libspdk_vfio_user.a 00:02:03.460 SO libspdk_vfio_user.so.5.0 00:02:03.460 SYMLINK libspdk_ioat.so 00:02:03.460 SYMLINK libspdk_vfio_user.so 00:02:03.717 LIB libspdk_util.a 00:02:03.717 SO libspdk_util.so.9.1 00:02:03.976 SYMLINK libspdk_util.so 00:02:03.976 LIB libspdk_trace_parser.a 00:02:03.976 SO libspdk_trace_parser.so.5.0 00:02:04.233 CC lib/reduce/reduce.o 00:02:04.233 CC lib/json/json_parse.o 00:02:04.233 CC lib/json/json_util.o 00:02:04.233 CC lib/json/json_write.o 00:02:04.233 CC lib/conf/conf.o 00:02:04.233 CC lib/vmd/vmd.o 00:02:04.233 CC lib/vmd/led.o 00:02:04.233 CC lib/idxd/idxd.o 00:02:04.233 CC lib/idxd/idxd_user.o 00:02:04.233 CC lib/idxd/idxd_kernel.o 00:02:04.233 CC lib/rdma_provider/common.o 00:02:04.233 CC lib/env_dpdk/env.o 00:02:04.233 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:04.233 CC lib/env_dpdk/memory.o 00:02:04.233 CC lib/env_dpdk/pci.o 00:02:04.233 CC lib/env_dpdk/init.o 00:02:04.233 CC lib/env_dpdk/threads.o 00:02:04.233 CC lib/env_dpdk/pci_ioat.o 00:02:04.233 CC lib/env_dpdk/pci_virtio.o 00:02:04.233 SYMLINK libspdk_trace_parser.so 00:02:04.233 CC lib/rdma_utils/rdma_utils.o 00:02:04.233 CC lib/env_dpdk/pci_vmd.o 00:02:04.233 CC lib/env_dpdk/pci_idxd.o 00:02:04.233 CC lib/env_dpdk/pci_event.o 00:02:04.233 CC lib/env_dpdk/sigbus_handler.o 00:02:04.233 CC lib/env_dpdk/pci_dpdk.o 00:02:04.233 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.233 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.490 LIB 
libspdk_rdma_provider.a 00:02:04.490 LIB libspdk_json.a 00:02:04.490 SO libspdk_rdma_provider.so.6.0 00:02:04.490 LIB libspdk_rdma_utils.a 00:02:04.490 SO libspdk_json.so.6.0 00:02:04.748 SO libspdk_rdma_utils.so.1.0 00:02:04.748 SYMLINK libspdk_rdma_provider.so 00:02:04.748 SYMLINK libspdk_json.so 00:02:04.748 LIB libspdk_conf.a 00:02:04.748 SYMLINK libspdk_rdma_utils.so 00:02:04.748 SO libspdk_conf.so.6.0 00:02:04.748 SYMLINK libspdk_conf.so 00:02:04.748 LIB libspdk_idxd.a 00:02:04.748 SO libspdk_idxd.so.12.0 00:02:05.006 LIB libspdk_reduce.a 00:02:05.006 LIB libspdk_vmd.a 00:02:05.006 SO libspdk_reduce.so.6.0 00:02:05.006 SO libspdk_vmd.so.6.0 00:02:05.006 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.006 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.006 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.006 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.006 SYMLINK libspdk_idxd.so 00:02:05.006 SYMLINK libspdk_vmd.so 00:02:05.006 SYMLINK libspdk_reduce.so 00:02:05.264 LIB libspdk_jsonrpc.a 00:02:05.264 SO libspdk_jsonrpc.so.6.0 00:02:05.264 SYMLINK libspdk_jsonrpc.so 00:02:05.830 CC lib/rpc/rpc.o 00:02:05.830 LIB libspdk_env_dpdk.a 00:02:05.830 SO libspdk_env_dpdk.so.14.1 00:02:05.830 LIB libspdk_rpc.a 00:02:05.830 SO libspdk_rpc.so.6.0 00:02:06.089 SYMLINK libspdk_rpc.so 00:02:06.089 SYMLINK libspdk_env_dpdk.so 00:02:06.347 CC lib/notify/notify.o 00:02:06.347 CC lib/notify/notify_rpc.o 00:02:06.347 CC lib/trace/trace.o 00:02:06.347 CC lib/trace/trace_flags.o 00:02:06.347 CC lib/trace/trace_rpc.o 00:02:06.347 CC lib/keyring/keyring.o 00:02:06.347 CC lib/keyring/keyring_rpc.o 00:02:06.605 LIB libspdk_notify.a 00:02:06.605 SO libspdk_notify.so.6.0 00:02:06.605 LIB libspdk_keyring.a 00:02:06.605 LIB libspdk_trace.a 00:02:06.605 SO libspdk_keyring.so.1.0 00:02:06.605 SO libspdk_trace.so.10.0 00:02:06.605 SYMLINK libspdk_notify.so 00:02:06.605 SYMLINK libspdk_keyring.so 00:02:06.605 SYMLINK libspdk_trace.so 00:02:07.173 CC lib/thread/thread.o 00:02:07.173 CC lib/thread/iobuf.o 
00:02:07.173 CC lib/sock/sock.o 00:02:07.173 CC lib/sock/sock_rpc.o 00:02:07.432 LIB libspdk_sock.a 00:02:07.432 SO libspdk_sock.so.10.0 00:02:07.432 SYMLINK libspdk_sock.so 00:02:07.691 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.691 CC lib/nvme/nvme_ctrlr.o 00:02:07.691 CC lib/nvme/nvme_fabric.o 00:02:07.691 CC lib/nvme/nvme_ns.o 00:02:07.691 CC lib/nvme/nvme_ns_cmd.o 00:02:07.691 CC lib/nvme/nvme_pcie_common.o 00:02:07.691 CC lib/nvme/nvme_pcie.o 00:02:07.691 CC lib/nvme/nvme_qpair.o 00:02:07.691 CC lib/nvme/nvme.o 00:02:07.691 CC lib/nvme/nvme_quirks.o 00:02:07.691 CC lib/nvme/nvme_transport.o 00:02:07.691 CC lib/nvme/nvme_discovery.o 00:02:07.691 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.691 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.691 CC lib/nvme/nvme_tcp.o 00:02:07.691 CC lib/nvme/nvme_opal.o 00:02:07.691 CC lib/nvme/nvme_io_msg.o 00:02:07.691 CC lib/nvme/nvme_poll_group.o 00:02:07.691 CC lib/nvme/nvme_zns.o 00:02:07.691 CC lib/nvme/nvme_stubs.o 00:02:07.691 CC lib/nvme/nvme_auth.o 00:02:07.691 CC lib/nvme/nvme_cuse.o 00:02:07.691 CC lib/nvme/nvme_rdma.o 00:02:08.624 LIB libspdk_thread.a 00:02:08.624 SO libspdk_thread.so.10.1 00:02:08.624 SYMLINK libspdk_thread.so 00:02:09.190 CC lib/accel/accel.o 00:02:09.190 CC lib/accel/accel_rpc.o 00:02:09.190 CC lib/accel/accel_sw.o 00:02:09.190 CC lib/blob/blobstore.o 00:02:09.190 CC lib/init/json_config.o 00:02:09.190 CC lib/blob/request.o 00:02:09.190 CC lib/init/subsystem.o 00:02:09.190 CC lib/blob/zeroes.o 00:02:09.190 CC lib/init/subsystem_rpc.o 00:02:09.190 CC lib/blob/blob_bs_dev.o 00:02:09.190 CC lib/init/rpc.o 00:02:09.190 CC lib/virtio/virtio.o 00:02:09.190 CC lib/virtio/virtio_vhost_user.o 00:02:09.190 CC lib/virtio/virtio_vfio_user.o 00:02:09.190 CC lib/virtio/virtio_pci.o 00:02:09.448 LIB libspdk_init.a 00:02:09.448 SO libspdk_init.so.5.0 00:02:09.448 LIB libspdk_virtio.a 00:02:09.448 SO libspdk_virtio.so.7.0 00:02:09.448 SYMLINK libspdk_init.so 00:02:09.448 SYMLINK libspdk_virtio.so 00:02:09.707 CC 
lib/event/app.o 00:02:09.707 CC lib/event/reactor.o 00:02:09.707 CC lib/event/log_rpc.o 00:02:09.707 CC lib/event/app_rpc.o 00:02:09.707 CC lib/event/scheduler_static.o 00:02:09.965 LIB libspdk_accel.a 00:02:10.224 SO libspdk_accel.so.15.1 00:02:10.224 LIB libspdk_event.a 00:02:10.224 SYMLINK libspdk_accel.so 00:02:10.224 SO libspdk_event.so.14.0 00:02:10.483 SYMLINK libspdk_event.so 00:02:10.483 CC lib/bdev/bdev.o 00:02:10.483 CC lib/bdev/bdev_rpc.o 00:02:10.483 CC lib/bdev/bdev_zone.o 00:02:10.483 CC lib/bdev/scsi_nvme.o 00:02:10.483 CC lib/bdev/part.o 00:02:10.741 LIB libspdk_nvme.a 00:02:10.741 SO libspdk_nvme.so.13.1 00:02:11.000 SYMLINK libspdk_nvme.so 00:02:12.377 LIB libspdk_blob.a 00:02:12.377 SO libspdk_blob.so.11.0 00:02:12.377 SYMLINK libspdk_blob.so 00:02:12.636 CC lib/lvol/lvol.o 00:02:12.636 CC lib/blobfs/blobfs.o 00:02:12.636 CC lib/blobfs/tree.o 00:02:13.202 LIB libspdk_bdev.a 00:02:13.461 SO libspdk_bdev.so.15.1 00:02:13.461 SYMLINK libspdk_bdev.so 00:02:13.461 LIB libspdk_blobfs.a 00:02:13.461 SO libspdk_blobfs.so.10.0 00:02:13.719 SYMLINK libspdk_blobfs.so 00:02:13.720 LIB libspdk_lvol.a 00:02:13.720 SO libspdk_lvol.so.10.0 00:02:13.720 CC lib/ftl/ftl_core.o 00:02:13.720 CC lib/ftl/ftl_init.o 00:02:13.720 CC lib/ftl/ftl_layout.o 00:02:13.720 CC lib/ftl/ftl_debug.o 00:02:13.720 CC lib/ftl/ftl_io.o 00:02:13.720 CC lib/ftl/ftl_sb.o 00:02:13.720 CC lib/ftl/ftl_l2p.o 00:02:13.720 CC lib/ftl/ftl_l2p_flat.o 00:02:13.720 CC lib/nbd/nbd.o 00:02:13.720 CC lib/nbd/nbd_rpc.o 00:02:13.720 CC lib/ftl/ftl_nv_cache.o 00:02:13.720 CC lib/scsi/dev.o 00:02:13.720 CC lib/ftl/ftl_band_ops.o 00:02:13.720 CC lib/ftl/ftl_band.o 00:02:13.720 CC lib/scsi/lun.o 00:02:13.720 CC lib/nvmf/ctrlr.o 00:02:13.720 CC lib/ftl/ftl_writer.o 00:02:13.720 CC lib/nvmf/ctrlr_discovery.o 00:02:13.720 CC lib/scsi/port.o 00:02:13.720 CC lib/scsi/scsi.o 00:02:13.720 CC lib/ftl/ftl_rq.o 00:02:13.720 CC lib/nvmf/ctrlr_bdev.o 00:02:13.720 CC lib/ftl/ftl_reloc.o 00:02:13.720 CC 
lib/nvmf/subsystem.o 00:02:13.720 CC lib/scsi/scsi_bdev.o 00:02:13.720 CC lib/ftl/ftl_l2p_cache.o 00:02:13.720 CC lib/scsi/scsi_rpc.o 00:02:13.720 CC lib/scsi/scsi_pr.o 00:02:13.720 CC lib/nvmf/nvmf.o 00:02:13.720 CC lib/ftl/ftl_p2l.o 00:02:13.720 CC lib/scsi/task.o 00:02:13.720 CC lib/nvmf/nvmf_rpc.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.720 CC lib/nvmf/tcp.o 00:02:13.720 CC lib/nvmf/transport.o 00:02:13.720 CC lib/ublk/ublk.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.720 CC lib/ublk/ublk_rpc.o 00:02:13.720 CC lib/nvmf/stubs.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.720 CC lib/nvmf/mdns_server.o 00:02:13.720 CC lib/nvmf/rdma.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.720 CC lib/nvmf/auth.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.720 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.720 CC lib/ftl/utils/ftl_conf.o 00:02:13.720 CC lib/ftl/utils/ftl_md.o 00:02:13.720 CC lib/ftl/utils/ftl_mempool.o 00:02:13.720 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.720 CC lib/ftl/utils/ftl_property.o 00:02:13.720 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.720 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.720 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.720 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.720 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.720 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.720 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.720 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.720 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.720 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.720 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.720 SYMLINK libspdk_lvol.so 00:02:13.720 CC 
lib/ftl/base/ftl_base_dev.o 00:02:13.720 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.720 CC lib/ftl/ftl_trace.o 00:02:14.308 LIB libspdk_nbd.a 00:02:14.308 SO libspdk_nbd.so.7.0 00:02:14.610 LIB libspdk_scsi.a 00:02:14.610 SYMLINK libspdk_nbd.so 00:02:14.610 SO libspdk_scsi.so.9.0 00:02:14.610 SYMLINK libspdk_scsi.so 00:02:14.610 LIB libspdk_ublk.a 00:02:14.610 SO libspdk_ublk.so.3.0 00:02:14.869 SYMLINK libspdk_ublk.so 00:02:14.869 CC lib/iscsi/conn.o 00:02:14.869 CC lib/iscsi/init_grp.o 00:02:14.869 CC lib/iscsi/md5.o 00:02:14.869 CC lib/iscsi/iscsi.o 00:02:14.869 CC lib/iscsi/param.o 00:02:14.869 CC lib/iscsi/tgt_node.o 00:02:14.869 CC lib/iscsi/portal_grp.o 00:02:14.869 CC lib/iscsi/iscsi_subsystem.o 00:02:14.869 CC lib/iscsi/iscsi_rpc.o 00:02:14.869 CC lib/iscsi/task.o 00:02:14.869 CC lib/vhost/vhost_rpc.o 00:02:14.869 CC lib/vhost/vhost.o 00:02:14.869 CC lib/vhost/vhost_scsi.o 00:02:14.869 CC lib/vhost/vhost_blk.o 00:02:14.869 CC lib/vhost/rte_vhost_user.o 00:02:15.127 LIB libspdk_ftl.a 00:02:15.127 SO libspdk_ftl.so.9.0 00:02:15.694 SYMLINK libspdk_ftl.so 00:02:15.952 LIB libspdk_vhost.a 00:02:15.952 LIB libspdk_nvmf.a 00:02:16.210 SO libspdk_vhost.so.8.0 00:02:16.210 SO libspdk_nvmf.so.18.1 00:02:16.210 SYMLINK libspdk_vhost.so 00:02:16.468 LIB libspdk_iscsi.a 00:02:16.468 SO libspdk_iscsi.so.8.0 00:02:16.468 SYMLINK libspdk_nvmf.so 00:02:16.726 SYMLINK libspdk_iscsi.so 00:02:17.293 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.293 CC module/blob/bdev/blob_bdev.o 00:02:17.293 CC module/accel/ioat/accel_ioat.o 00:02:17.293 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.293 CC module/sock/posix/posix.o 00:02:17.293 CC module/keyring/linux/keyring.o 00:02:17.293 CC module/accel/error/accel_error.o 00:02:17.293 CC module/keyring/linux/keyring_rpc.o 00:02:17.293 CC module/keyring/file/keyring.o 00:02:17.293 CC module/accel/error/accel_error_rpc.o 00:02:17.293 CC module/keyring/file/keyring_rpc.o 00:02:17.293 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 
00:02:17.293 CC module/accel/dsa/accel_dsa.o 00:02:17.293 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:17.293 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.293 CC module/accel/iaa/accel_iaa.o 00:02:17.293 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.293 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:17.293 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:17.293 LIB libspdk_env_dpdk_rpc.a 00:02:17.293 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.293 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.293 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.550 SO libspdk_env_dpdk_rpc.so.6.0 00:02:17.550 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.550 LIB libspdk_keyring_file.a 00:02:17.550 LIB libspdk_keyring_linux.a 00:02:17.550 LIB libspdk_scheduler_gscheduler.a 00:02:17.550 SO libspdk_keyring_file.so.1.0 00:02:17.550 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.550 LIB libspdk_accel_error.a 00:02:17.550 SO libspdk_keyring_linux.so.1.0 00:02:17.550 SO libspdk_scheduler_gscheduler.so.4.0 00:02:17.550 LIB libspdk_accel_ioat.a 00:02:17.550 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:17.550 LIB libspdk_accel_iaa.a 00:02:17.550 LIB libspdk_scheduler_dynamic.a 00:02:17.550 SO libspdk_accel_error.so.2.0 00:02:17.809 SYMLINK libspdk_keyring_file.so 00:02:17.809 SO libspdk_accel_ioat.so.6.0 00:02:17.809 SO libspdk_scheduler_dynamic.so.4.0 00:02:17.809 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.809 SYMLINK libspdk_keyring_linux.so 00:02:17.809 SO libspdk_accel_iaa.so.3.0 00:02:17.809 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.809 LIB libspdk_accel_dsa.a 00:02:17.809 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.809 SYMLINK libspdk_accel_ioat.so 00:02:17.809 SO libspdk_accel_dsa.so.5.0 00:02:17.809 SYMLINK libspdk_accel_iaa.so 00:02:17.809 SYMLINK libspdk_accel_error.so 00:02:17.809 SYMLINK libspdk_accel_dsa.so 00:02:18.067 LIB libspdk_blob_bdev.a 00:02:18.067 SO 
libspdk_blob_bdev.so.11.0 00:02:18.067 SYMLINK libspdk_blob_bdev.so 00:02:18.067 LIB libspdk_sock_posix.a 00:02:18.325 SO libspdk_sock_posix.so.6.0 00:02:18.325 SYMLINK libspdk_sock_posix.so 00:02:18.582 CC module/bdev/error/vbdev_error.o 00:02:18.582 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.582 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.582 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.582 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.582 CC module/bdev/nvme/bdev_nvme.o 00:02:18.582 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.582 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.582 CC module/bdev/nvme/nvme_rpc.o 00:02:18.582 CC module/bdev/delay/vbdev_delay.o 00:02:18.582 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.582 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.582 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.582 CC module/bdev/malloc/bdev_malloc.o 00:02:18.582 CC module/bdev/nvme/vbdev_opal.o 00:02:18.582 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.582 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.582 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.582 CC module/bdev/split/vbdev_split.o 00:02:18.582 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.582 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.582 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.582 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.582 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.582 CC module/bdev/crypto/vbdev_crypto.o 00:02:18.582 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.582 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:18.582 CC module/bdev/gpt/gpt.o 00:02:18.582 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.582 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.582 CC module/bdev/null/bdev_null.o 00:02:18.582 CC module/bdev/null/bdev_null_rpc.o 00:02:18.582 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.582 CC module/bdev/ftl/bdev_ftl.o 00:02:18.582 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.582 CC module/bdev/ftl/bdev_ftl_rpc.o 
00:02:18.582 LIB libspdk_accel_dpdk_compressdev.a 00:02:18.582 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:18.582 CC module/bdev/compress/vbdev_compress.o 00:02:18.582 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.582 CC module/bdev/raid/bdev_raid.o 00:02:18.582 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.582 CC module/bdev/aio/bdev_aio.o 00:02:18.582 CC module/bdev/raid/raid0.o 00:02:18.582 CC module/bdev/raid/raid1.o 00:02:18.582 CC module/bdev/raid/concat.o 00:02:18.582 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.582 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:18.582 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:18.840 LIB libspdk_bdev_split.a 00:02:18.840 LIB libspdk_accel_dpdk_cryptodev.a 00:02:18.840 LIB libspdk_bdev_error.a 00:02:18.840 SO libspdk_bdev_split.so.6.0 00:02:18.840 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:18.840 LIB libspdk_bdev_malloc.a 00:02:18.840 LIB libspdk_bdev_aio.a 00:02:18.840 LIB libspdk_blobfs_bdev.a 00:02:18.840 SO libspdk_bdev_error.so.6.0 00:02:18.840 LIB libspdk_bdev_ftl.a 00:02:18.840 LIB libspdk_bdev_gpt.a 00:02:18.840 SO libspdk_bdev_malloc.so.6.0 00:02:18.840 SO libspdk_bdev_aio.so.6.0 00:02:18.840 SO libspdk_blobfs_bdev.so.6.0 00:02:18.840 SYMLINK libspdk_bdev_split.so 00:02:19.098 LIB libspdk_bdev_passthru.a 00:02:19.098 LIB libspdk_bdev_zone_block.a 00:02:19.098 SO libspdk_bdev_gpt.so.6.0 00:02:19.098 SO libspdk_bdev_ftl.so.6.0 00:02:19.098 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:19.098 LIB libspdk_bdev_crypto.a 00:02:19.098 SYMLINK libspdk_bdev_error.so 00:02:19.098 SO libspdk_bdev_passthru.so.6.0 00:02:19.098 SO libspdk_bdev_zone_block.so.6.0 00:02:19.098 SYMLINK libspdk_bdev_malloc.so 00:02:19.098 SYMLINK libspdk_blobfs_bdev.so 00:02:19.098 SYMLINK libspdk_bdev_aio.so 00:02:19.098 SO libspdk_bdev_crypto.so.6.0 00:02:19.098 LIB libspdk_bdev_delay.a 00:02:19.098 SYMLINK libspdk_bdev_gpt.so 00:02:19.098 LIB libspdk_bdev_iscsi.a 00:02:19.098 SYMLINK libspdk_bdev_ftl.so 00:02:19.098 SYMLINK 
libspdk_bdev_passthru.so 00:02:19.098 SO libspdk_bdev_delay.so.6.0 00:02:19.098 SYMLINK libspdk_bdev_zone_block.so 00:02:19.098 SO libspdk_bdev_iscsi.so.6.0 00:02:19.098 LIB libspdk_bdev_compress.a 00:02:19.098 SYMLINK libspdk_bdev_crypto.so 00:02:19.098 LIB libspdk_bdev_null.a 00:02:19.098 SO libspdk_bdev_compress.so.6.0 00:02:19.098 SYMLINK libspdk_bdev_delay.so 00:02:19.098 SO libspdk_bdev_null.so.6.0 00:02:19.098 SYMLINK libspdk_bdev_iscsi.so 00:02:19.098 LIB libspdk_bdev_virtio.a 00:02:19.356 SYMLINK libspdk_bdev_compress.so 00:02:19.356 SO libspdk_bdev_virtio.so.6.0 00:02:19.356 SYMLINK libspdk_bdev_null.so 00:02:19.356 SYMLINK libspdk_bdev_virtio.so 00:02:19.613 LIB libspdk_bdev_lvol.a 00:02:19.613 SO libspdk_bdev_lvol.so.6.0 00:02:19.876 SYMLINK libspdk_bdev_lvol.so 00:02:19.876 LIB libspdk_bdev_raid.a 00:02:19.876 SO libspdk_bdev_raid.so.6.0 00:02:19.876 SYMLINK libspdk_bdev_raid.so 00:02:20.811 LIB libspdk_bdev_nvme.a 00:02:21.069 SO libspdk_bdev_nvme.so.7.0 00:02:21.069 SYMLINK libspdk_bdev_nvme.so 00:02:21.636 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.636 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.636 CC module/event/subsystems/keyring/keyring.o 00:02:21.636 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.636 CC module/event/subsystems/scheduler/scheduler.o 00:02:21.636 CC module/event/subsystems/vmd/vmd.o 00:02:21.636 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.636 CC module/event/subsystems/sock/sock.o 00:02:21.894 LIB libspdk_event_vhost_blk.a 00:02:21.894 LIB libspdk_event_keyring.a 00:02:21.894 LIB libspdk_event_scheduler.a 00:02:21.894 LIB libspdk_event_iobuf.a 00:02:21.894 LIB libspdk_event_sock.a 00:02:21.894 LIB libspdk_event_vmd.a 00:02:21.894 SO libspdk_event_vhost_blk.so.3.0 00:02:21.894 SO libspdk_event_keyring.so.1.0 00:02:21.894 SO libspdk_event_scheduler.so.4.0 00:02:21.894 SO libspdk_event_iobuf.so.3.0 00:02:21.894 SO libspdk_event_sock.so.5.0 00:02:21.894 SO libspdk_event_vmd.so.6.0 00:02:21.894 
SYMLINK libspdk_event_keyring.so 00:02:21.894 SYMLINK libspdk_event_vhost_blk.so 00:02:22.152 SYMLINK libspdk_event_scheduler.so 00:02:22.152 SYMLINK libspdk_event_sock.so 00:02:22.152 SYMLINK libspdk_event_iobuf.so 00:02:22.152 SYMLINK libspdk_event_vmd.so 00:02:22.410 CC module/event/subsystems/accel/accel.o 00:02:22.410 LIB libspdk_event_accel.a 00:02:22.670 SO libspdk_event_accel.so.6.0 00:02:22.670 SYMLINK libspdk_event_accel.so 00:02:22.928 CC module/event/subsystems/bdev/bdev.o 00:02:23.187 LIB libspdk_event_bdev.a 00:02:23.187 SO libspdk_event_bdev.so.6.0 00:02:23.187 SYMLINK libspdk_event_bdev.so 00:02:23.445 CC module/event/subsystems/scsi/scsi.o 00:02:23.445 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.445 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.445 CC module/event/subsystems/ublk/ublk.o 00:02:23.445 CC module/event/subsystems/nbd/nbd.o 00:02:23.704 LIB libspdk_event_ublk.a 00:02:23.704 LIB libspdk_event_nbd.a 00:02:23.704 SO libspdk_event_ublk.so.3.0 00:02:23.704 SO libspdk_event_nbd.so.6.0 00:02:23.704 LIB libspdk_event_scsi.a 00:02:23.704 SO libspdk_event_scsi.so.6.0 00:02:23.704 LIB libspdk_event_nvmf.a 00:02:23.704 SYMLINK libspdk_event_nbd.so 00:02:23.704 SYMLINK libspdk_event_ublk.so 00:02:23.704 SO libspdk_event_nvmf.so.6.0 00:02:23.963 SYMLINK libspdk_event_scsi.so 00:02:23.963 SYMLINK libspdk_event_nvmf.so 00:02:24.221 CC module/event/subsystems/iscsi/iscsi.o 00:02:24.221 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.221 LIB libspdk_event_iscsi.a 00:02:24.479 SO libspdk_event_iscsi.so.6.0 00:02:24.479 LIB libspdk_event_vhost_scsi.a 00:02:24.479 SYMLINK libspdk_event_iscsi.so 00:02:24.479 SO libspdk_event_vhost_scsi.so.3.0 00:02:24.479 SYMLINK libspdk_event_vhost_scsi.so 00:02:24.738 SO libspdk.so.6.0 00:02:24.738 SYMLINK libspdk.so 00:02:24.996 CC app/spdk_lspci/spdk_lspci.o 00:02:24.996 CC app/spdk_nvme_perf/perf.o 00:02:24.996 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.996 CXX app/trace/trace.o 
00:02:24.996 CC app/trace_record/trace_record.o 00:02:24.996 CC app/spdk_nvme_identify/identify.o 00:02:24.996 CC app/spdk_top/spdk_top.o 00:02:24.996 CC test/rpc_client/rpc_client_test.o 00:02:24.996 TEST_HEADER include/spdk/accel.h 00:02:24.996 TEST_HEADER include/spdk/assert.h 00:02:24.996 TEST_HEADER include/spdk/barrier.h 00:02:24.996 TEST_HEADER include/spdk/accel_module.h 00:02:24.996 TEST_HEADER include/spdk/base64.h 00:02:24.996 TEST_HEADER include/spdk/bdev_module.h 00:02:24.996 TEST_HEADER include/spdk/bdev.h 00:02:24.996 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.996 TEST_HEADER include/spdk/bit_array.h 00:02:24.996 TEST_HEADER include/spdk/bit_pool.h 00:02:24.996 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.996 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.996 TEST_HEADER include/spdk/blob.h 00:02:24.996 TEST_HEADER include/spdk/blobfs.h 00:02:24.996 TEST_HEADER include/spdk/conf.h 00:02:24.996 TEST_HEADER include/spdk/config.h 00:02:24.996 TEST_HEADER include/spdk/crc16.h 00:02:24.996 TEST_HEADER include/spdk/cpuset.h 00:02:24.996 TEST_HEADER include/spdk/crc32.h 00:02:24.996 TEST_HEADER include/spdk/crc64.h 00:02:24.996 TEST_HEADER include/spdk/dma.h 00:02:24.996 TEST_HEADER include/spdk/dif.h 00:02:24.996 TEST_HEADER include/spdk/endian.h 00:02:24.996 TEST_HEADER include/spdk/env.h 00:02:24.996 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.996 TEST_HEADER include/spdk/event.h 00:02:24.996 TEST_HEADER include/spdk/fd_group.h 00:02:24.996 TEST_HEADER include/spdk/fd.h 00:02:24.996 TEST_HEADER include/spdk/file.h 00:02:24.996 TEST_HEADER include/spdk/ftl.h 00:02:24.996 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.996 TEST_HEADER include/spdk/histogram_data.h 00:02:24.996 TEST_HEADER include/spdk/idxd.h 00:02:24.996 TEST_HEADER include/spdk/hexlify.h 00:02:24.996 CC app/spdk_dd/spdk_dd.o 00:02:24.996 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.996 TEST_HEADER include/spdk/init.h 00:02:24.996 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.996 
TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.996 TEST_HEADER include/spdk/json.h 00:02:24.996 TEST_HEADER include/spdk/ioat.h 00:02:24.996 TEST_HEADER include/spdk/keyring.h 00:02:24.996 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.996 TEST_HEADER include/spdk/keyring_module.h 00:02:24.996 TEST_HEADER include/spdk/likely.h 00:02:24.996 CC app/nvmf_tgt/nvmf_main.o 00:02:24.996 TEST_HEADER include/spdk/log.h 00:02:24.996 TEST_HEADER include/spdk/lvol.h 00:02:24.996 TEST_HEADER include/spdk/memory.h 00:02:24.996 TEST_HEADER include/spdk/mmio.h 00:02:24.996 TEST_HEADER include/spdk/nbd.h 00:02:24.996 TEST_HEADER include/spdk/notify.h 00:02:24.996 TEST_HEADER include/spdk/nvme.h 00:02:24.996 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.996 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.996 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.996 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.996 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.996 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.996 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.996 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.258 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.259 TEST_HEADER include/spdk/nvmf.h 00:02:25.259 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.259 TEST_HEADER include/spdk/opal.h 00:02:25.259 TEST_HEADER include/spdk/pci_ids.h 00:02:25.259 TEST_HEADER include/spdk/opal_spec.h 00:02:25.259 TEST_HEADER include/spdk/pipe.h 00:02:25.259 TEST_HEADER include/spdk/queue.h 00:02:25.259 TEST_HEADER include/spdk/reduce.h 00:02:25.259 TEST_HEADER include/spdk/rpc.h 00:02:25.259 TEST_HEADER include/spdk/scheduler.h 00:02:25.259 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.259 TEST_HEADER include/spdk/scsi.h 00:02:25.259 TEST_HEADER include/spdk/sock.h 00:02:25.259 TEST_HEADER include/spdk/string.h 00:02:25.259 TEST_HEADER include/spdk/stdinc.h 00:02:25.259 TEST_HEADER include/spdk/thread.h 00:02:25.259 TEST_HEADER include/spdk/trace.h 00:02:25.259 CC app/iscsi_tgt/iscsi_tgt.o 
00:02:25.259 TEST_HEADER include/spdk/tree.h 00:02:25.259 TEST_HEADER include/spdk/trace_parser.h 00:02:25.259 TEST_HEADER include/spdk/ublk.h 00:02:25.259 TEST_HEADER include/spdk/util.h 00:02:25.259 TEST_HEADER include/spdk/uuid.h 00:02:25.259 TEST_HEADER include/spdk/version.h 00:02:25.259 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:25.259 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.259 TEST_HEADER include/spdk/vhost.h 00:02:25.259 TEST_HEADER include/spdk/vmd.h 00:02:25.259 TEST_HEADER include/spdk/xor.h 00:02:25.259 TEST_HEADER include/spdk/zipf.h 00:02:25.259 CXX test/cpp_headers/accel.o 00:02:25.259 CXX test/cpp_headers/accel_module.o 00:02:25.259 CXX test/cpp_headers/assert.o 00:02:25.259 CC app/spdk_tgt/spdk_tgt.o 00:02:25.259 CXX test/cpp_headers/barrier.o 00:02:25.259 CXX test/cpp_headers/base64.o 00:02:25.259 CXX test/cpp_headers/bdev.o 00:02:25.259 CXX test/cpp_headers/bdev_zone.o 00:02:25.259 CXX test/cpp_headers/bit_array.o 00:02:25.259 CXX test/cpp_headers/bdev_module.o 00:02:25.259 CXX test/cpp_headers/bit_pool.o 00:02:25.259 CXX test/cpp_headers/blob_bdev.o 00:02:25.259 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.259 CXX test/cpp_headers/blobfs.o 00:02:25.259 CXX test/cpp_headers/blob.o 00:02:25.259 CXX test/cpp_headers/cpuset.o 00:02:25.259 CXX test/cpp_headers/conf.o 00:02:25.259 CXX test/cpp_headers/config.o 00:02:25.259 CXX test/cpp_headers/crc16.o 00:02:25.259 CXX test/cpp_headers/crc64.o 00:02:25.259 CXX test/cpp_headers/dif.o 00:02:25.259 CXX test/cpp_headers/dma.o 00:02:25.259 CXX test/cpp_headers/crc32.o 00:02:25.259 CXX test/cpp_headers/endian.o 00:02:25.259 CXX test/cpp_headers/event.o 00:02:25.259 CXX test/cpp_headers/env.o 00:02:25.259 CXX test/cpp_headers/env_dpdk.o 00:02:25.259 CXX test/cpp_headers/fd.o 00:02:25.259 CXX test/cpp_headers/fd_group.o 00:02:25.259 CXX test/cpp_headers/file.o 00:02:25.259 CXX test/cpp_headers/ftl.o 00:02:25.259 CXX test/cpp_headers/gpt_spec.o 00:02:25.259 CXX test/cpp_headers/hexlify.o 
00:02:25.259 CXX test/cpp_headers/histogram_data.o 00:02:25.259 CXX test/cpp_headers/idxd_spec.o 00:02:25.259 CXX test/cpp_headers/idxd.o 00:02:25.259 CXX test/cpp_headers/init.o 00:02:25.259 CXX test/cpp_headers/ioat.o 00:02:25.259 CXX test/cpp_headers/json.o 00:02:25.259 CXX test/cpp_headers/ioat_spec.o 00:02:25.259 CXX test/cpp_headers/iscsi_spec.o 00:02:25.259 CXX test/cpp_headers/jsonrpc.o 00:02:25.259 CXX test/cpp_headers/keyring.o 00:02:25.259 CXX test/cpp_headers/keyring_module.o 00:02:25.259 CXX test/cpp_headers/lvol.o 00:02:25.259 CXX test/cpp_headers/log.o 00:02:25.259 CXX test/cpp_headers/likely.o 00:02:25.259 CXX test/cpp_headers/memory.o 00:02:25.259 CXX test/cpp_headers/mmio.o 00:02:25.259 CXX test/cpp_headers/nbd.o 00:02:25.259 CXX test/cpp_headers/notify.o 00:02:25.259 CXX test/cpp_headers/nvme.o 00:02:25.259 CXX test/cpp_headers/nvme_intel.o 00:02:25.259 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.259 CXX test/cpp_headers/nvme_spec.o 00:02:25.259 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.259 CXX test/cpp_headers/nvme_zns.o 00:02:25.259 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.259 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.259 CC test/app/jsoncat/jsoncat.o 00:02:25.259 CC examples/util/zipf/zipf.o 00:02:25.259 CC test/app/histogram_perf/histogram_perf.o 00:02:25.259 CC examples/ioat/verify/verify.o 00:02:25.259 CC test/app/stub/stub.o 00:02:25.259 CC examples/ioat/perf/perf.o 00:02:25.259 CXX test/cpp_headers/nvmf.o 00:02:25.259 CC app/fio/nvme/fio_plugin.o 00:02:25.259 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.259 CC test/env/vtophys/vtophys.o 00:02:25.259 CC test/env/memory/memory_ut.o 00:02:25.259 CC test/env/pci/pci_ut.o 00:02:25.259 CC test/thread/poller_perf/poller_perf.o 00:02:25.259 CC test/dma/test_dma/test_dma.o 00:02:25.259 CC test/app/bdev_svc/bdev_svc.o 00:02:25.259 LINK spdk_lspci 00:02:25.526 CC app/fio/bdev/fio_plugin.o 00:02:25.526 LINK rpc_client_test 00:02:25.526 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:02:25.526 LINK nvmf_tgt 00:02:25.787 LINK spdk_nvme_discover 00:02:25.787 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.787 LINK spdk_trace_record 00:02:25.787 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.787 LINK jsoncat 00:02:25.787 LINK interrupt_tgt 00:02:25.787 LINK poller_perf 00:02:25.787 CXX test/cpp_headers/nvmf_spec.o 00:02:25.787 LINK spdk_tgt 00:02:25.787 LINK vtophys 00:02:25.787 LINK stub 00:02:25.787 CXX test/cpp_headers/nvmf_transport.o 00:02:25.787 LINK histogram_perf 00:02:25.787 CXX test/cpp_headers/opal.o 00:02:25.787 CXX test/cpp_headers/opal_spec.o 00:02:25.787 LINK iscsi_tgt 00:02:25.787 CXX test/cpp_headers/pci_ids.o 00:02:25.787 CXX test/cpp_headers/pipe.o 00:02:25.787 CXX test/cpp_headers/queue.o 00:02:25.787 CXX test/cpp_headers/reduce.o 00:02:25.787 CXX test/cpp_headers/rpc.o 00:02:25.787 CXX test/cpp_headers/scheduler.o 00:02:26.046 CXX test/cpp_headers/scsi_spec.o 00:02:26.046 CXX test/cpp_headers/scsi.o 00:02:26.046 CXX test/cpp_headers/sock.o 00:02:26.046 CXX test/cpp_headers/stdinc.o 00:02:26.046 CXX test/cpp_headers/string.o 00:02:26.046 CXX test/cpp_headers/thread.o 00:02:26.046 CXX test/cpp_headers/trace.o 00:02:26.046 CXX test/cpp_headers/trace_parser.o 00:02:26.046 CXX test/cpp_headers/tree.o 00:02:26.046 CXX test/cpp_headers/ublk.o 00:02:26.046 CXX test/cpp_headers/util.o 00:02:26.046 CXX test/cpp_headers/uuid.o 00:02:26.046 CXX test/cpp_headers/version.o 00:02:26.046 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.046 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.046 CXX test/cpp_headers/vhost.o 00:02:26.046 CXX test/cpp_headers/vmd.o 00:02:26.046 CXX test/cpp_headers/xor.o 00:02:26.046 CXX test/cpp_headers/zipf.o 00:02:26.046 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.046 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.046 LINK bdev_svc 00:02:26.046 LINK verify 00:02:26.046 LINK spdk_trace 00:02:26.046 LINK zipf 00:02:26.046 LINK spdk_dd 00:02:26.303 LINK env_dpdk_post_init 00:02:26.303 LINK 
ioat_perf 00:02:26.303 LINK test_dma 00:02:26.303 LINK pci_ut 00:02:26.303 CC test/event/event_perf/event_perf.o 00:02:26.303 CC test/event/reactor_perf/reactor_perf.o 00:02:26.303 CC test/event/reactor/reactor.o 00:02:26.303 LINK nvme_fuzz 00:02:26.303 CC test/event/app_repeat/app_repeat.o 00:02:26.561 CC test/event/scheduler/scheduler.o 00:02:26.561 LINK spdk_nvme 00:02:26.561 LINK spdk_bdev 00:02:26.561 LINK spdk_top 00:02:26.561 CC app/vhost/vhost.o 00:02:26.561 LINK mem_callbacks 00:02:26.561 LINK reactor 00:02:26.561 LINK reactor_perf 00:02:26.561 LINK event_perf 00:02:26.561 LINK vhost_fuzz 00:02:26.561 LINK spdk_nvme_perf 00:02:26.561 LINK app_repeat 00:02:26.561 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.561 CC examples/vmd/led/led.o 00:02:26.561 CC examples/idxd/perf/perf.o 00:02:26.561 CC examples/sock/hello_world/hello_sock.o 00:02:26.819 CC examples/thread/thread/thread_ex.o 00:02:26.819 LINK scheduler 00:02:26.819 CC test/nvme/sgl/sgl.o 00:02:26.819 CC test/nvme/e2edp/nvme_dp.o 00:02:26.819 CC test/nvme/aer/aer.o 00:02:26.819 CC test/nvme/reset/reset.o 00:02:26.819 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.819 CC test/nvme/simple_copy/simple_copy.o 00:02:26.819 CC test/nvme/fdp/fdp.o 00:02:26.819 CC test/nvme/boot_partition/boot_partition.o 00:02:26.819 LINK vhost 00:02:26.819 CC test/nvme/err_injection/err_injection.o 00:02:26.819 CC test/nvme/compliance/nvme_compliance.o 00:02:26.819 CC test/nvme/cuse/cuse.o 00:02:26.819 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.819 CC test/nvme/reserve/reserve.o 00:02:26.819 CC test/nvme/overhead/overhead.o 00:02:26.819 CC test/nvme/connect_stress/connect_stress.o 00:02:26.819 CC test/nvme/startup/startup.o 00:02:26.819 LINK lsvmd 00:02:26.819 LINK memory_ut 00:02:26.819 CC test/blobfs/mkfs/mkfs.o 00:02:26.819 LINK led 00:02:26.819 CC test/accel/dif/dif.o 00:02:26.819 CC test/lvol/esnap/esnap.o 00:02:27.077 LINK hello_sock 00:02:27.077 LINK startup 00:02:27.077 LINK err_injection 00:02:27.077 LINK 
thread 00:02:27.077 LINK boot_partition 00:02:27.077 LINK fused_ordering 00:02:27.077 LINK doorbell_aers 00:02:27.077 LINK connect_stress 00:02:27.077 LINK reserve 00:02:27.077 LINK simple_copy 00:02:27.077 LINK overhead 00:02:27.077 LINK sgl 00:02:27.077 LINK reset 00:02:27.077 LINK mkfs 00:02:27.077 LINK nvme_dp 00:02:27.077 LINK aer 00:02:27.077 LINK spdk_nvme_identify 00:02:27.077 LINK nvme_compliance 00:02:27.077 LINK fdp 00:02:27.336 LINK idxd_perf 00:02:27.336 LINK dif 00:02:27.336 CC examples/nvme/abort/abort.o 00:02:27.336 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.336 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.336 CC examples/nvme/reconnect/reconnect.o 00:02:27.600 CC examples/nvme/hotplug/hotplug.o 00:02:27.600 CC examples/nvme/arbitration/arbitration.o 00:02:27.600 CC examples/nvme/hello_world/hello_world.o 00:02:27.600 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.600 CC examples/accel/perf/accel_perf.o 00:02:27.600 CC examples/blob/cli/blobcli.o 00:02:27.600 CC examples/blob/hello_world/hello_blob.o 00:02:27.600 LINK pmr_persistence 00:02:27.600 LINK iscsi_fuzz 00:02:27.600 LINK cmb_copy 00:02:27.917 LINK hello_world 00:02:27.917 LINK hotplug 00:02:27.917 LINK reconnect 00:02:27.917 LINK abort 00:02:27.917 LINK arbitration 00:02:27.917 LINK hello_blob 00:02:27.917 LINK nvme_manage 00:02:27.917 CC test/bdev/bdevio/bdevio.o 00:02:28.204 LINK accel_perf 00:02:28.204 LINK cuse 00:02:28.463 LINK bdevio 00:02:28.721 LINK blobcli 00:02:28.721 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.721 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.980 LINK hello_bdev 00:02:29.547 LINK bdevperf 00:02:30.114 CC examples/nvmf/nvmf/nvmf.o 00:02:30.373 LINK nvmf 00:02:32.276 LINK esnap 00:02:32.535 00:02:32.535 real 1m29.990s 00:02:32.535 user 18m28.670s 00:02:32.535 sys 4m28.392s 00:02:32.535 18:17:18 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:32.535 18:17:18 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.535 
************************************ 00:02:32.535 END TEST make 00:02:32.535 ************************************ 00:02:32.535 18:17:18 -- common/autotest_common.sh@1142 -- $ return 0 00:02:32.535 18:17:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.535 18:17:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:32.535 18:17:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:32.535 18:17:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.535 18:17:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:32.535 18:17:18 -- pm/common@44 -- $ pid=2582775 00:02:32.535 18:17:18 -- pm/common@50 -- $ kill -TERM 2582775 00:02:32.535 18:17:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.535 18:17:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.535 18:17:18 -- pm/common@44 -- $ pid=2582776 00:02:32.535 18:17:18 -- pm/common@50 -- $ kill -TERM 2582776 00:02:32.535 18:17:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.535 18:17:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.535 18:17:18 -- pm/common@44 -- $ pid=2582779 00:02:32.535 18:17:18 -- pm/common@50 -- $ kill -TERM 2582779 00:02:32.535 18:17:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.535 18:17:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.535 18:17:18 -- pm/common@44 -- $ pid=2582802 00:02:32.535 18:17:18 -- pm/common@50 -- $ sudo -E kill -TERM 2582802 00:02:32.794 18:17:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.794 18:17:18 -- nvmf/common.sh@7 -- # uname -s 00:02:32.794 18:17:18 -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:02:32.794 18:17:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.794 18:17:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.794 18:17:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.794 18:17:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.794 18:17:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.794 18:17:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.794 18:17:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.794 18:17:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.794 18:17:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.794 18:17:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b98b40-9a1d-eb11-906e-0017a4403562 00:02:32.794 18:17:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b98b40-9a1d-eb11-906e-0017a4403562 00:02:32.794 18:17:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.794 18:17:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.794 18:17:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:32.794 18:17:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.794 18:17:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:32.794 18:17:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.794 18:17:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.794 18:17:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.794 18:17:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.794 18:17:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.794 18:17:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.794 18:17:18 -- paths/export.sh@5 -- # export PATH 00:02:32.794 18:17:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.794 18:17:18 -- nvmf/common.sh@47 -- # : 0 00:02:32.794 18:17:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:32.794 18:17:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:32.794 18:17:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.794 18:17:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.794 18:17:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.794 18:17:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:32.794 18:17:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:32.794 18:17:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:32.794 18:17:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.794 18:17:18 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.794 18:17:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.794 18:17:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.794 18:17:18 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:32.794 18:17:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.794 18:17:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:32.794 18:17:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.794 18:17:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.794 18:17:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.794 18:17:18 -- spdk/autotest.sh@48 -- # udevadm_pid=2651995 00:02:32.794 18:17:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.794 18:17:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.794 18:17:18 -- pm/common@17 -- # local monitor 00:02:32.794 18:17:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.794 18:17:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.794 18:17:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.794 18:17:18 -- pm/common@21 -- # date +%s 00:02:32.794 18:17:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.794 18:17:18 -- pm/common@21 -- # date +%s 00:02:32.794 18:17:18 -- pm/common@25 -- # sleep 1 00:02:32.794 18:17:18 -- pm/common@21 -- # date +%s 00:02:32.794 18:17:18 -- pm/common@21 -- # date +%s 00:02:32.794 18:17:18 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060238 00:02:32.794 18:17:18 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060238 00:02:32.794 18:17:18 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060238 00:02:32.794 18:17:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060238 00:02:32.794 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060238_collect-vmstat.pm.log 00:02:32.794 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060238_collect-cpu-load.pm.log 00:02:32.794 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060238_collect-cpu-temp.pm.log 00:02:32.794 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060238_collect-bmc-pm.bmc.pm.log 00:02:33.731 18:17:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.731 18:17:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.731 18:17:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:33.731 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:02:33.731 18:17:19 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.731 18:17:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:33.731 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:02:33.731 18:17:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:33.731 18:17:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:33.731 18:17:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:33.990 18:17:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:33.990 18:17:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 
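The four collectors launched above all follow one pattern: a sampling script is started in the background with an output directory (`-d`), a label/PID suffix (`-l -p`), and its output redirected to a per-collector `.pm.log` file that can be reaped later. A minimal sketch of that pattern — the `start_monitor`/`stop_monitor` helpers and `OUT_DIR` layout here are illustrative stand-ins, not SPDK's actual `pm/common` API:

```shell
#!/usr/bin/env sh
# Sketch of the background-monitor pattern (hypothetical helper names).
OUT_DIR=$(mktemp -d)

start_monitor() {
    name=$1
    log="$OUT_DIR/${name}.pm.log"
    # Stand-in sampling loop; the real collectors sample CPU load,
    # vmstat, temperatures, or BMC power instead of a timestamp.
    ( while :; do date +%s >> "$log"; sleep 1; done ) &
    # Record the sampler's PID so cleanup can kill it later.
    echo $! > "$OUT_DIR/${name}.pid"
}

stop_monitor() {
    kill "$(cat "$OUT_DIR/$1.pid")" 2>/dev/null || true
}

start_monitor collect-cpu-load
sleep 1
stop_monitor collect-cpu-load
[ -s "$OUT_DIR/collect-cpu-load.pm.log" ] && echo "samples recorded"
```

The "Redirecting to …pm.log" lines in the log are the same idea: each monitor's stdout is pointed at its own timestamped log file so the test run itself stays quiet.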
00:02:33.990 18:17:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.990 18:17:19 -- common/autotest_common.sh@1455 -- # uname 00:02:33.990 18:17:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:33.990 18:17:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.990 18:17:19 -- common/autotest_common.sh@1475 -- # uname 00:02:33.990 18:17:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:33.990 18:17:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:33.990 18:17:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:33.990 18:17:19 -- spdk/autotest.sh@72 -- # hash lcov 00:02:33.990 18:17:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:33.990 18:17:19 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:33.990 --rc lcov_branch_coverage=1 00:02:33.990 --rc lcov_function_coverage=1 00:02:33.990 --rc genhtml_branch_coverage=1 00:02:33.990 --rc genhtml_function_coverage=1 00:02:33.990 --rc genhtml_legend=1 00:02:33.990 --rc geninfo_all_blocks=1 00:02:33.990 ' 00:02:33.990 18:17:19 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:33.990 --rc lcov_branch_coverage=1 00:02:33.990 --rc lcov_function_coverage=1 00:02:33.990 --rc genhtml_branch_coverage=1 00:02:33.990 --rc genhtml_function_coverage=1 00:02:33.990 --rc genhtml_legend=1 00:02:33.990 --rc geninfo_all_blocks=1 00:02:33.990 ' 00:02:33.990 18:17:19 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:33.990 --rc lcov_branch_coverage=1 00:02:33.990 --rc lcov_function_coverage=1 00:02:33.990 --rc genhtml_branch_coverage=1 00:02:33.990 --rc genhtml_function_coverage=1 00:02:33.990 --rc genhtml_legend=1 00:02:33.990 --rc geninfo_all_blocks=1 00:02:33.990 --no-external' 00:02:33.990 18:17:19 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:33.990 --rc lcov_branch_coverage=1 00:02:33.990 --rc lcov_function_coverage=1 00:02:33.990 --rc genhtml_branch_coverage=1 00:02:33.990 --rc genhtml_function_coverage=1 00:02:33.990 --rc 
genhtml_legend=1 00:02:33.990 --rc geninfo_all_blocks=1 00:02:33.990 --no-external' 00:02:33.990 18:17:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:33.990 lcov: LCOV version 1.14 00:02:33.990 18:17:19 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info
00:02:35.893 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:35.893 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno
[identical "no functions found" / geninfo warnings repeated for the remaining test/cpp_headers/*.gcno files]
00:02:54.508 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:54.508 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:12.622 18:17:55 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:12.622 18:17:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.622 18:17:55 -- common/autotest_common.sh@10 -- # set +x 00:03:12.622 18:17:55 -- spdk/autotest.sh@91 -- # rm -f 00:03:12.622 18:17:55 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.881 0000:5f:00.0 (1b96 2600):
Already using the nvme driver
00:03:12.881 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:13.139 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:13.139 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:13.398 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:13.398 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:13.398 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:13.398 18:17:58 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:13.398 18:17:58 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:13.398 18:17:58 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:13.398 18:17:58 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:13.399 18:17:58 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:13.399 18:17:58 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:13.399 18:17:58 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:13.399 18:17:58 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:13.399 18:17:58 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:13.399 18:17:58 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:13.399 18:17:58 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2
00:03:13.399 18:17:58 -- common/autotest_common.sh@1662 -- # local device=nvme0n2
00:03:13.399 18:17:58 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:03:13.399 18:17:58 -- common/autotest_common.sh@1665 -- # [[ host-managed != none ]]
00:03:13.399 18:17:58 -- common/autotest_common.sh@1674 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0
00:03:13.399 18:17:58 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:13.399 18:17:58 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:03:13.399 18:17:58 -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:03:13.399 18:17:58 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:03:13.399 18:17:58 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:13.399 18:17:58 -- spdk/autotest.sh@98 -- # (( 1 > 0 ))
00:03:13.399 18:17:58 -- spdk/autotest.sh@103 -- # export PCI_BLOCKED=0000:5f:00.0
00:03:13.399 18:17:58 -- spdk/autotest.sh@103 -- # PCI_BLOCKED=0000:5f:00.0
00:03:13.399 18:17:58 -- spdk/autotest.sh@104 -- # export PCI_ZONED=0000:5f:00.0
00:03:13.399 18:17:58 -- spdk/autotest.sh@104 -- # PCI_ZONED=0000:5f:00.0
00:03:13.399 18:17:58 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:13.399 18:17:58 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:13.399 18:17:58 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:13.399 18:17:58 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:13.399 18:17:58 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:13.399 No valid GPT data, bailing
00:03:13.399 18:17:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:13.399 18:17:58 -- scripts/common.sh@391 -- # pt=
00:03:13.399 18:17:58 -- scripts/common.sh@392 -- # return 1
00:03:13.399 18:17:58 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:13.399 1+0 records in
00:03:13.399 1+0 records out
00:03:13.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606331 s, 173 MB/s
00:03:13.399 18:17:58 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:13.399 18:17:58 -- spdk/autotest.sh@112 -- # [[ -z 0000:5f:00.0 ]]
00:03:13.399 18:17:58 -- spdk/autotest.sh@112 -- # continue
00:03:13.399 18:17:58 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:13.399 18:17:58 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:13.399 18:17:58 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1
00:03:13.399 18:17:58 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt
00:03:13.399 18:17:58 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:03:13.657 No valid GPT data, bailing
00:03:13.657 18:17:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:13.657 18:17:58 -- scripts/common.sh@391 -- # pt=
00:03:13.657 18:17:58 -- scripts/common.sh@392 -- # return 1
00:03:13.657 18:17:58 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:03:13.657 1+0 records in
00:03:13.657 1+0 records out
00:03:13.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533394 s, 197 MB/s
00:03:13.657 18:17:58 -- spdk/autotest.sh@118 -- # sync
00:03:13.657 18:17:58 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:13.657 18:17:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:13.657 18:17:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:18.925 18:18:03 -- spdk/autotest.sh@124 -- # uname -s
00:03:18.925 18:18:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:18.925 18:18:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
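The trace above shows get_zoned_devs walking /sys/block/nvme*/queue/zoned and recording nvme0n2 (host-managed, behind 0000:5f:00.0) as zoned, which then drives PCI_BLOCKED/PCI_ZONED. A minimal, self-contained sketch of that classification check follows; the fake sysfs tree and the yes/no output are illustrative stand-ins, not the real helper's interface:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device check traced above (simplified from the real
# is_block_zoned helper): /sys/block/<dev>/queue/zoned reads "none" for a
# conventional namespace and e.g. "host-managed" for a zoned one.
set -euo pipefail

is_block_zoned() {
  local device=$1
  # Missing attribute: treat as non-zoned (the real helper skips it too).
  [[ -e $SYSFS/$device/queue/zoned ]] || { echo no; return; }
  [[ $(<"$SYSFS/$device/queue/zoned") != none ]] && echo yes || echo no
}

# Build a fake sysfs tree so the sketch runs anywhere (illustration only).
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS"/nvme0n1/queue "$SYSFS"/nvme0n2/queue
echo none > "$SYSFS"/nvme0n1/queue/zoned
echo host-managed > "$SYSFS"/nvme0n2/queue/zoned

is_block_zoned nvme0n1   # prints: no
is_block_zoned nvme0n2   # prints: yes
rm -rf "$SYSFS"
```

In the run above, the zoned namespace is what causes `PCI_BLOCKED=0000:5f:00.0` to be exported, keeping that controller out of the GPT-check/dd-wipe loop that follows.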
00:03:18.925 18:18:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.925 18:18:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.925 18:18:03 -- common/autotest_common.sh@10 -- # set +x
00:03:18.925 ************************************
00:03:18.925 START TEST setup.sh
00:03:18.925 ************************************
00:03:18.925 18:18:03 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:03:18.925 * Looking for test storage...
00:03:18.925 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:03:18.925 18:18:03 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:18.925 18:18:03 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:18.925 18:18:03 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:03:18.925 18:18:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.925 18:18:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.925 18:18:03 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:18.925 ************************************
00:03:18.925 START TEST acl
00:03:18.925 ************************************
00:03:18.925 18:18:03 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:03:18.925 * Looking for test storage...
00:03:18.925 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:18.925 18:18:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ host-managed != none ]] 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1674 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:18.925 18:18:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:18.925 18:18:04 
setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:18.925 18:18:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:18.925 18:18:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:18.925 18:18:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:18.925 18:18:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:18.925 18:18:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:18.925 18:18:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.925 18:18:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.150 18:18:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:23.150 18:18:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:23.150 18:18:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.150 18:18:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:23.150 18:18:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.151 18:18:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:25.715 Hugepages 00:03:25.715 node hugesize free / total 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 00:03:25.715 
Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 
== *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.715 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@21 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # 
[[ ioatdma == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.982 18:18:11 
setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:25.982 18:18:11 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:25.982 18:18:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.982 18:18:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.982 18:18:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:25.982 ************************************
00:03:25.982 START TEST denied
00:03:25.982 ************************************
00:03:25.982 18:18:11 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:03:25.982 18:18:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0'
00:03:25.982 18:18:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:25.982 18:18:11 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:03:25.982 18:18:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.982 18:18:11 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:30.174 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:30.174 18:18:14 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:34.366
00:03:34.366 real 0m8.117s
00:03:34.366 user 0m2.793s
00:03:34.366 sys 0m4.591s 18:18:19 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:34.366 18:18:19 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:34.366 ************************************
00:03:34.366 END TEST denied
00:03:34.366 ************************************
00:03:34.366 18:18:19 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:34.366 18:18:19 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:34.366 18:18:19 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:34.366 18:18:19 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:34.366 18:18:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:34.366 ************************************
00:03:34.366 START TEST allowed
00:03:34.366 ************************************
00:03:34.366 18:18:19 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:03:34.366 18:18:19 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:03:34.366 18:18:19 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:34.366 18:18:19 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:03:34.366 18:18:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.366 18:18:19 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:38.637 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:38.637 18:18:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:38.637 18:18:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:38.637 18:18:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:38.637 18:18:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:38.637 18:18:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:42.831
00:03:42.831 real 0m8.165s
00:03:42.831 user 0m2.730s
00:03:42.831 sys 0m4.552s 18:18:27 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:42.831 18:18:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:42.831 ************************************
00:03:42.831 END TEST allowed
00:03:42.831 ************************************
00:03:42.831 18:18:27 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:42.831
00:03:42.831 real 0m23.749s
00:03:42.831 user 0m8.345s
00:03:42.831 sys 0m14.010s 18:18:27 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:42.831 18:18:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:42.831 ************************************
00:03:42.831 END TEST acl
00:03:42.831 ************************************
00:03:42.831 18:18:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:42.831 18:18:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:03:42.831 18:18:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
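The hugepages suite whose get_meminfo trace follows reads fields out of /proc/meminfo by splitting each line on `IFS=': '` with `read -r var val _` (setup/common.sh@31). A simplified sketch of that lookup, assuming plain /proc/meminfo; the real helper can also read a per-NUMA-node meminfo file:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced below: split each /proc/meminfo
# line on ": " and print the numeric value for one key. The function name
# matches the trace; the body here is a simplified reconstruction.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"     # value without the trailing "kB" unit
      return 0
    fi
  done < /proc/meminfo
  return 1            # key not present
}

get_meminfo MemTotal   # prints total memory in kB, e.g. 92292904 on the CI box above
```

The suite uses the same lookup for Hugepagesize, HugePages_Total, and friends to decide how many hugepages the tests can claim.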
00:03:42.831 18:18:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.831 18:18:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.831 ************************************ 00:03:42.831 START TEST hugepages 00:03:42.831 ************************************ 00:03:42.831 18:18:27 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.831 * Looking for test storage... 00:03:42.831 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.831 18:18:27 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 74096940 kB' 'MemAvailable: 78779972 kB' 'Buffers: 16508 kB' 'Cached: 10830228 kB' 'SwapCached: 0 kB' 'Active: 6774924 kB' 'Inactive: 4612004 kB' 'Active(anon): 6386176 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543440 kB' 'Mapped: 167120 kB' 'Shmem: 5845984 kB' 'KReclaimable: 518472 kB' 'Slab: 1036368 kB' 'SReclaimable: 518472 kB' 'SUnreclaim: 517896 kB' 'KernelStack: 19296 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52437904 kB' 'Committed_AS: 7851280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 210900 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- 
# continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.831 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.832 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue
[... repetitive setup/common.sh@31-32 xtrace elided: the get_meminfo loop reads and skips every remaining /proc/meminfo key (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, and so on through HugePages_Surp) before reaching the Hugepagesize match below ...]
00:03:42.833 18:18:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize ==
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.833 18:18:27 setup.sh.hugepages -- 
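The xtrace above is setup/common.sh's get_meminfo loop matching the Hugepagesize key with the `IFS=': ' read -r var val _` pattern. A minimal standalone sketch of that parsing technique (the function name and sample data here are illustrative, not lifted from the SPDK source):

```shell
# Sketch of the "IFS=': ' read -r var val _" scan seen in the xtrace above.
# get_meminfo_value is a hypothetical name; the sample mirrors this log's values.
get_meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"   # first field after the key, unit discarded into _
            return 0
        fi
    done
    return 1
}

sample='MemTotal: 92292904 kB
HugePages_Total: 1024
Hugepagesize: 2048 kB'

printf '%s\n' "$sample" | get_meminfo_value Hugepagesize   # prints 2048
```

Setting IFS to both `:` and space lets one `read` split `Hugepagesize: 2048 kB` into the key, the value, and the unit, which is why the script's loop needs no awk or grep.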
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.833 18:18:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:42.833 18:18:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.833 18:18:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.833 18:18:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.833 ************************************ 00:03:42.833 START TEST default_setup 00:03:42.833 ************************************ 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.833 18:18:27 
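The clear_hp trace above writes 0 into each node's per-size nr_hugepages file before the test sets its own pool. A hedged sketch of that step (the sysfs-root parameter is added here purely so the sketch can be exercised without root; the real script writes directly under /sys):

```shell
# Zero every per-node hugepage pool, as the clear_hp xtrace above does.
# The root argument is an illustrative addition for testability.
clear_hp() {
    local root=${1:-/sys/devices/system/node} node hp
    for node in "$root"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Writing the real sysfs files requires root privileges.
            [ -e "$hp/nr_hugepages" ] && echo 0 > "$hp/nr_hugepages"
        done
    done
    return 0
}
```

On this two-node machine the trace shows four writes: two hugepage sizes per node, each reset to 0, after which CLEAR_HUGE=yes is exported.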
setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:42.833 18:18:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.833 18:18:27 
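get_test_nr_hugepages above turns the requested pool size into a page count. Assuming the size argument is in kB (consistent with the logged values: 2097152 requested, 2048 kB default page, nr_hugepages=1024), the arithmetic is:

```shell
# Reproduce the nr_hugepages=1024 computation from the xtrace above.
size_kb=2097152          # size argument passed to get_test_nr_hugepages
default_hugepages=2048   # Hugepagesize reported by /proc/meminfo, in kB
nr_hugepages=$(( size_kb / default_hugepages ))
echo "$nr_hugepages"     # prints 1024
```

With a single user node ('0') requested, the whole 1024-page allocation is assigned to node 0, which is what nodes_test[_no_nodes]=1024 records.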
setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:45.367 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:45.935 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.935 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.877 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76247084 kB' 'MemAvailable: 80929532 kB' 'Buffers: 16508 kB' 'Cached: 10830340 kB' 'SwapCached: 0 kB' 'Active: 6790156 kB' 'Inactive: 4612004 kB' 'Active(anon): 6401408 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557504 kB' 'Mapped: 167372 kB' 'Shmem: 5846096 kB' 'KReclaimable: 517888 kB' 'Slab: 1034616 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516728 kB' 'KernelStack: 19488 kB' 'PageTables: 
8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7871324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211140 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
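The anon-hugepages gate at the start of verify_nr_hugepages tests whether the bracketed value in transparent_hugepage/enabled is "never". A sketch of extracting that active mode from the string the log shows (`always [madvise] never`), using plain parameter expansion:

```shell
# Extract the bracketed (active) THP mode, as the xtrace's [[ ... != *\[\n\e\v\e\r\]* ]]
# test inspects. The sample string is the one printed in this log.
thp_enabled='always [madvise] never'
active=${thp_enabled#*[[]}    # drop everything through the opening bracket
active=${active%%]*}          # drop the closing bracket and the rest
echo "$active"                # prints madvise
```

Because the active mode here is madvise rather than never, the script goes on to read AnonHugePages from /proc/meminfo, which the loop below resolves to 0.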
00:03:47.140 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... repetitive setup/common.sh@31-32 xtrace elided: the loop skips every /proc/meminfo key ahead of AnonHugePages (Cached through HardwareCorrupted) ...]
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:47.142 18:18:32
setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76248620 kB' 'MemAvailable: 80931068 kB' 'Buffers: 16508 kB' 'Cached: 10830344 kB' 'SwapCached: 0 kB' 'Active: 6789396 kB' 'Inactive: 4612004 kB' 'Active(anon): 6400648 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557760 kB' 'Mapped: 167208 kB' 'Shmem: 5846100 kB' 'KReclaimable: 517888 kB' 'Slab: 1034668 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516780 kB' 'KernelStack: 19456 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7871340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211140 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.142 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.143 18:18:32 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.143 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76248276 kB' 'MemAvailable: 80930724 kB' 'Buffers: 16508 kB' 'Cached: 10830364 kB' 'SwapCached: 0 kB' 'Active: 6789100 kB' 'Inactive: 4612004 kB' 'Active(anon): 6400352 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557460 kB' 'Mapped: 167208 kB' 'Shmem: 5846120 kB' 'KReclaimable: 517888 kB' 'Slab: 1034636 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516748 kB' 'KernelStack: 19456 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7871364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211124 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.144 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.144-145 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # get_meminfo HugePages_Rsvd: read -r var val _ loop over /proc/meminfo, continuing past every non-matching key (Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free)
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:47.145 nr_hugepages=1024
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:47.145 resv_hugepages=0
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:47.145 surplus_hugepages=0
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:47.145 anon_hugepages=0
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:47.145 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # local get=HugePages_Total node= var val; mem_f=/proc/meminfo; mapfile -t mem
00:03:47.146 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76249628 kB' 'MemAvailable: 80932076 kB' 'Buffers: 16508 kB' 'Cached: 10830384 kB' 'SwapCached: 0 kB' 'Active: 6789252 kB' 'Inactive: 4612004 kB' 'Active(anon): 6400504 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557604 kB' 'Mapped: 167208 kB' 'Shmem: 5846140 kB' 'KReclaimable: 517888 kB' 'Slab: 1034636 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516748 kB' 'KernelStack: 19472 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7871136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211204 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB'
00:03:47.146-147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # get_meminfo HugePages_Total: second read -r var val _ scan over the snapshot above, continuing past every non-matching key (MemTotal through CmaTotal so far) 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.147 
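The repeated `IFS=': '` / `read -r var val _` / `continue` records above are one pass of the meminfo scanner in `setup/common.sh`. A minimal standalone re-creation of that loop (the function name and the file argument are hypothetical; the real helper reads `/proc/meminfo` directly):

```shell
#!/usr/bin/env bash
# Sketch of the scanner traced above: walk a meminfo-style file one
# "Key: value" line at a time, skip non-matching keys with `continue`
# (those are the repeated records in the log), and echo the value once
# the requested key is found.
get_meminfo_from() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # every other key yields a `continue` record
    echo "$val"
    return 0
  done < "$file"
  return 1
}
```

Asking it for `HugePages_Total` reproduces the `echo 1024` / `return 0` pair the trace ends the scan with.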
18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.147 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 36287712 kB' 'MemUsed: 11830812 kB' 'SwapCached: 0 kB' 
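The per-node branch just traced switches `mem_f` to `/sys/devices/system/node/node0/meminfo` and strips the `Node N ` prefix with an extglob pattern before scanning. A hypothetical standalone version of that prefix handling (the file path is passed in so it runs against any meminfo-style input):

```shell
#!/usr/bin/env bash
# Sketch of the per-node meminfo handling shown above: mapfile the file,
# strip the "Node N " prefix that per-node meminfo lines carry, then scan
# for the requested key exactly like the flat /proc/meminfo case.
shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip
node_meminfo_value() {
  local get=$1 mem_f=$2
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # same strip as setup/common.sh@29
  local line var val _
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
```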
'Active: 4966364 kB' 'Inactive: 3555560 kB' 'Active(anon): 4796612 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8254884 kB' 'Mapped: 73412 kB' 'AnonPages: 269724 kB' 'Shmem: 4529572 kB' 'KernelStack: 10392 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325324 kB' 'Slab: 601808 kB' 'SReclaimable: 325324 kB' 'SUnreclaim: 276484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.147 18:18:32
[... identical IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue xtrace repeats for the node0 meminfo keys (MemTotal through HugePages_Free) ...]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.148 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.148 18:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.149
18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.149 node0=1024 expecting 1024 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.149 00:03:47.149 real 0m4.659s 00:03:47.149 user 0m1.561s 00:03:47.149 sys 0m2.305s 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.149 18:18:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:47.149 ************************************ 00:03:47.149 END TEST default_setup 00:03:47.149 ************************************ 00:03:47.407 18:18:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.407 18:18:32 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:47.407 18:18:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.407 18:18:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.407 18:18:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.407 ************************************ 00:03:47.407 START TEST per_node_1G_alloc 00:03:47.407 ************************************ 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 
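The `node0=1024 expecting 1024` verdict above is the final comparison of a node's allocated hugepages against the requested count. A hedged sketch of that check (the sysfs path is the usual 2 MiB counter; the optional third argument is an addition here so the check can run against any readable file):

```shell
#!/usr/bin/env bash
# Sketch of the verification step: read a node's allocated hugepage count,
# print the "nodeN=got expecting want" line the log shows, and fail when
# they differ.
check_node_hugepages() {
  local node=$1 expected=$2
  # default to the node's 2 MiB hugepage counter in sysfs
  local f=${3:-/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages}
  [[ -r $f ]] || return 1
  local got
  got=$(<"$f")
  echo "node$node=$got expecting $expected"
  [[ $got == "$expected" ]]
}
```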
00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:47.407 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:47.408 
18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.408 18:18:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:50.694 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:50.694 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.694 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
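The get_test_nr_hugepages trace above turns `size=1048576` kB at the 2048 kB default hugepage size into `nr_hugepages=512`, assigned to each node in `HUGENODE=0,1`. A minimal sketch of that arithmetic (the function name is hypothetical; the 2 MiB default is an assumption matching this run):

```shell
#!/usr/bin/env bash
# Sketch of the per-node sizing traced above: a 1 GiB (1048576 kB) request
# divided by the assumed 2048 kB default hugepage size yields 512 pages,
# assigned to every node listed on the command line.
plan_hugepages() {
  local size_kb=$1; shift
  local default_kb=2048                  # assumed default hugepage size (2 MiB)
  local per_node=$(( size_kb / default_kb ))
  local -a nodes_test=()
  local node
  for node in "$@"; do                   # mirrors the user_nodes loop in the log
    nodes_test[$node]=$per_node
  done
  for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[$node]}"
  done
}
```

`plan_hugepages 1048576 0 1` prints `node0=512` and `node1=512`, matching the `nodes_test[_no_nodes]=512` assignments in the trace.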
00:03:50.694 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.694 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.694 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76219852 kB' 'MemAvailable: 80902300 kB' 'Buffers: 16508 kB' 'Cached: 10830480 kB' 'SwapCached: 0 kB' 'Active: 6793984 kB' 'Inactive: 4612004 kB' 'Active(anon): 6405236 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561376 kB' 'Mapped: 167012 kB' 'Shmem: 5846236 kB' 'KReclaimable: 517888 kB' 'Slab: 1034412 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516524 kB' 'KernelStack: 19392 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7866204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 210996 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 
18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.694 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.694 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 
18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76220024 kB' 'MemAvailable: 80902472 kB' 'Buffers: 16508 kB' 'Cached: 10830484 kB' 'SwapCached: 0 kB' 'Active: 6786908 kB' 'Inactive: 4612004 kB' 'Active(anon): 6398160 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555268 kB' 'Mapped: 166576 kB' 'Shmem: 5846240 kB' 'KReclaimable: 517888 kB' 'Slab: 1034396 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516508 kB' 'KernelStack: 19296 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7861132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 210948 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.695 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.696 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76219772 kB' 'MemAvailable: 80902220 kB' 'Buffers: 16508 kB' 'Cached: 
10830504 kB' 'SwapCached: 0 kB' 'Active: 6787384 kB' 'Inactive: 4612004 kB' 'Active(anon): 6398636 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555180 kB' 'Mapped: 166308 kB' 'Shmem: 5846260 kB' 'KReclaimable: 517888 kB' 'Slab: 1034396 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516508 kB' 'KernelStack: 19296 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7860260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 210964 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.697 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.698 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.699 nr_hugepages=1024 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.699 resv_hugepages=0 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.699 surplus_hugepages=0 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.699 anon_hugepages=0 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76219940 kB' 'MemAvailable: 80902388 kB' 'Buffers: 16508 kB' 'Cached: 10830504 kB' 'SwapCached: 0 kB' 'Active: 6787108 kB' 'Inactive: 4612004 kB' 'Active(anon): 6398360 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555412 kB' 'Mapped: 166308 kB' 'Shmem: 5846260 kB' 'KReclaimable: 517888 kB' 'Slab: 1034396 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516508 kB' 
'KernelStack: 19296 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7860284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 210964 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.699 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 37324568 kB' 'MemUsed: 10793956 kB' 'SwapCached: 0 kB' 'Active: 4965016 kB' 'Inactive: 3555560 kB' 'Active(anon): 4795264 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8254952 kB' 'Mapped: 73092 kB' 'AnonPages: 268884 kB' 'Shmem: 4529640 kB' 'KernelStack: 10408 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325324 kB' 'Slab: 601828 kB' 'SReclaimable: 325324 kB' 'SUnreclaim: 276504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.701 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.962 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174380 kB' 'MemFree: 38898264 kB' 'MemUsed: 5276116 kB' 'SwapCached: 0 kB' 'Active: 1822568 kB' 'Inactive: 1056444 kB' 'Active(anon): 1603572 kB' 'Inactive(anon): 0 kB' 'Active(file): 218996 kB' 'Inactive(file): 1056444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2592140 kB' 'Mapped: 93216 kB' 'AnonPages: 286964 kB' 'Shmem: 1316700 kB' 'KernelStack: 8904 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192564 kB' 'Slab: 432568 kB' 'SReclaimable: 192564 kB' 'SUnreclaim: 240004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 
18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.963 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:50.964 node0=512 expecting 512 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:50.964 node1=512 expecting 512 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:50.964 00:03:50.964 real 0m3.556s 00:03:50.964 user 0m1.432s 00:03:50.964 sys 0m2.196s 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.964 18:18:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.964 ************************************ 00:03:50.964 END TEST per_node_1G_alloc 00:03:50.964 ************************************ 00:03:50.964 18:18:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.964 18:18:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:50.964 18:18:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.964 18:18:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.964 18:18:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.964 ************************************ 00:03:50.964 START TEST even_2G_alloc 00:03:50.964 ************************************ 
00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.964 18:18:36 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.964 18:18:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:54.252 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:54.252 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.252 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
00:03:54.252 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.252 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.252 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76220320 kB' 'MemAvailable: 80902768 kB' 'Buffers: 16508 kB' 'Cached: 10830652 kB' 'SwapCached: 0 kB' 'Active: 6787860 kB' 'Inactive: 4612004 kB' 'Active(anon): 6399112 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556040 kB' 'Mapped: 166404 kB' 'Shmem: 5846408 kB' 'KReclaimable: 517888 kB' 'Slab: 1034160 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516272 kB' 'KernelStack: 19328 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7861292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211076 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 
18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.253 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76218976 kB' 'MemAvailable: 80901424 kB' 'Buffers: 16508 kB' 'Cached: 10830656 kB' 'SwapCached: 0 kB' 'Active: 6787904 kB' 'Inactive: 4612004 kB' 'Active(anon): 6399156 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556076 kB' 'Mapped: 166328 kB' 'Shmem: 5846412 kB' 'KReclaimable: 517888 kB' 'Slab: 1034160 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516272 kB' 'KernelStack: 19296 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7861308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211044 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.254 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.254 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.254 18:18:39
[... identical per-field skip iterations (Zswapped through HugePages_Free, each tested against HugePages_Surp and passed over via continue) elided ...]
00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.518 18:18:39
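The trace above is bash xtrace output from a get_meminfo-style helper that reads /proc/meminfo one line at a time with `IFS=': ' read -r var val _`, skipping every field via `continue` until the requested one (here HugePages_Surp) matches, then echoing its value. A minimal standalone sketch of that pattern — the function name and optional file argument are illustrative, not SPDK's exact code:

```shell
# Sketch of a /proc/meminfo field reader in the spirit of the traced loop.
# Assumes meminfo-style "Name: value [unit]" lines; prints the numeric value.
get_meminfo_value() {
  get=$1
  file=${2:-/proc/meminfo}
  while IFS=': ' read -r var val _; do
    # Skip every field until the requested one, as the continue lines in the log do.
    if [ "$var" = "$get" ]; then
      echo "$val"
      return 0
    fi
  done < "$file"
  return 1
}
```

With the dump shown in this trace, `get_meminfo_value HugePages_Surp` would print `0`, which is exactly the value the script stores as `surp=0`.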
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.518 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.519 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76220760 kB' 'MemAvailable: 80903208 kB' 'Buffers: 16508 kB' 'Cached: 10830672 kB' 'SwapCached: 0 kB' 'Active: 6787900 kB' 'Inactive: 4612004 kB' 'Active(anon): 6399152 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556052 kB' 'Mapped: 166328 kB' 'Shmem: 5846428 kB' 'KReclaimable: 517888 kB' 'Slab: 1034244 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516356 kB' 'KernelStack: 19296 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7861332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211044 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:54.519 18:18:39
[... identical per-field skip iterations (MemTotal through HugePages_Free, each tested against HugePages_Rsvd and passed over via continue) elided ...]
00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.520 nr_hugepages=1024 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.520 resv_hugepages=0 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.520 surplus_hugepages=0 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.520 anon_hugepages=0 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.520 18:18:39
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76222276 kB' 'MemAvailable: 80904724 kB' 'Buffers: 16508 kB' 'Cached: 10830712 kB' 'SwapCached: 0 kB' 'Active: 6787732 kB' 'Inactive: 4612004 kB' 'Active(anon): 6398984 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555812 kB' 'Mapped: 166328 kB' 'Shmem: 5846468 kB' 'KReclaimable: 517888 kB' 'Slab: 1034244 kB' 'SReclaimable: 517888 kB' 'SUnreclaim: 516356 kB' 'KernelStack: 19296 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7861352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211044 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:54.520 
18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.520 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.521 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 
18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.522 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 37313988 kB' 'MemUsed: 10804536 kB' 'SwapCached: 0 kB' 'Active: 4965572 kB' 'Inactive: 3555560 kB' 'Active(anon): 4795820 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8254984 kB' 'Mapped: 73112 kB' 'AnonPages: 269424 kB' 'Shmem: 4529672 kB' 'KernelStack: 10408 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325324 kB' 'Slab: 601676 kB' 'SReclaimable: 325324 kB' 'SUnreclaim: 276352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.522 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.523 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174380 kB' 'MemFree: 38908588 kB' 'MemUsed: 5265792 kB' 'SwapCached: 0 kB' 'Active: 1822460 kB' 'Inactive: 1056444 kB' 'Active(anon): 1603464 kB' 'Inactive(anon): 0 kB' 'Active(file): 218996 kB' 'Inactive(file): 1056444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2592260 kB' 'Mapped: 93216 kB' 'AnonPages: 286748 kB' 'Shmem: 1316820 kB' 'KernelStack: 8904 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192564 kB' 'Slab: 432568 kB' 'SReclaimable: 192564 kB' 'SUnreclaim: 240004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.524 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.525 18:18:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:54.525 node0=512 expecting 512
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:54.525 node1=512 expecting 512
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:54.525
00:03:54.525 real	0m3.606s
00:03:54.525 user	0m1.491s
00:03:54.525 sys	0m2.184s
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:54.525 18:18:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:54.525 ************************************
00:03:54.525 END TEST even_2G_alloc
00:03:54.525 ************************************
00:03:54.525 18:18:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:54.525 18:18:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:54.525 18:18:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:54.525 18:18:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:54.525 18:18:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:54.525 ************************************
00:03:54.525 START TEST odd_alloc
00:03:54.525 ************************************
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.525 18:18:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:57.830 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:57.830 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:57.830 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.830 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76215700 kB' 'MemAvailable: 80898108 kB' 'Buffers: 16508 kB' 'Cached: 10830812 kB' 'SwapCached: 0 kB' 'Active: 6790152 kB' 'Inactive: 4612004 kB' 'Active(anon): 6401404 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557724 kB' 'Mapped: 166496 kB' 'Shmem: 5846568 kB' 'KReclaimable: 517848 kB' 'Slab: 1034520 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516672 kB' 'KernelStack: 19328 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485456 kB' 'Committed_AS: 7861968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211060 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 
18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.830 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.831 18:18:43 
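The long match-and-`continue` trace above is `get_meminfo` in `setup/common.sh` walking every `/proc/meminfo` key with `IFS=': '` until it hits the requested one (here `AnonHugePages`, which returns 0). A simplified sketch of that pattern (this version streams the file directly and skips the per-node `/sys/devices/system/node/nodeN/meminfo` handling that the real script also does):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each meminfo
# line on ": " and return the value for one key. The optional second
# argument (an alternate file) is added here for testability and is
# not part of the real script's interface.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done < "$mem_f"
  echo 0   # key not found
}

# Typical usage: get_meminfo HugePages_Total
```

Because `IFS=': '` also splits off the trailing `kB` unit into `_`, the function returns a bare number, which is what the surrounding arithmetic (`anon=0`, surplus checks) needs.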
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76217240 kB' 'MemAvailable: 80899648 kB' 'Buffers: 16508 kB' 'Cached: 10830812 kB' 'SwapCached: 0 kB' 'Active: 6789524 kB' 'Inactive: 4612004 kB' 'Active(anon): 6400776 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557508 kB' 'Mapped: 166340 kB' 'Shmem: 5846568 kB' 'KReclaimable: 517848 kB' 'Slab: 1034504 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516656 kB' 'KernelStack: 19312 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485456 kB' 'Committed_AS: 7863264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211060 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.831 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 
18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.832 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.099 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.099 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76219344 kB' 'MemAvailable: 80901752 kB' 'Buffers: 16508 kB' 'Cached: 10830832 kB' 'SwapCached: 0 kB' 'Active: 6789108 kB' 'Inactive: 4612004 kB' 'Active(anon): 6400360 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 
kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557060 kB' 'Mapped: 166340 kB' 'Shmem: 5846588 kB' 'KReclaimable: 517848 kB' 'Slab: 1034504 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516656 kB' 'KernelStack: 19232 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485456 kB' 'Committed_AS: 7864676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211092 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 
18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.100 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:58.101 nr_hugepages=1025 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.101 resv_hugepages=0 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.101 surplus_hugepages=0 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.101 anon_hugepages=0 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76221064 kB' 'MemAvailable: 80903472 kB' 'Buffers: 16508 kB' 'Cached: 10830852 kB' 'SwapCached: 0 kB' 'Active: 6790308 kB' 'Inactive: 4612004 kB' 'Active(anon): 6401560 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558244 kB' 'Mapped: 166340 kB' 'Shmem: 5846608 kB' 'KReclaimable: 517848 kB' 'Slab: 1034504 kB' 'SReclaimable: 517848 kB' 
'SUnreclaim: 516656 kB' 'KernelStack: 19408 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485456 kB' 'Committed_AS: 7864836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211172 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.101 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 37298840 kB' 'MemUsed: 10819684 kB' 'SwapCached: 0 kB' 'Active: 4967084 kB' 'Inactive: 3555560 kB' 'Active(anon): 4797332 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8255020 kB' 'Mapped: 73132 kB' 'AnonPages: 271100 kB' 'Shmem: 4529708 kB' 'KernelStack: 10600 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325300 kB' 'Slab: 601944 kB' 'SReclaimable: 325300 kB' 'SUnreclaim: 276644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.102 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174380 kB' 'MemFree: 38920716 kB' 'MemUsed: 5253664 kB' 'SwapCached: 0 kB' 'Active: 1822404 kB' 
'Inactive: 1056444 kB' 'Active(anon): 1603408 kB' 'Inactive(anon): 0 kB' 'Active(file): 218996 kB' 'Inactive(file): 1056444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2592376 kB' 'Mapped: 93216 kB' 'AnonPages: 286504 kB' 'Shmem: 1316936 kB' 'KernelStack: 8872 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192548 kB' 'Slab: 432560 kB' 'SReclaimable: 192548 kB' 'SUnreclaim: 240012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.103 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:58.104 node0=512 expecting 
513 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:58.104 node1=513 expecting 512 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:58.104 00:03:58.104 real 0m3.511s 00:03:58.104 user 0m1.453s 00:03:58.104 sys 0m2.126s 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.104 18:18:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.104 ************************************ 00:03:58.104 END TEST odd_alloc 00:03:58.104 ************************************ 00:03:58.104 18:18:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.104 18:18:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:58.104 18:18:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.104 18:18:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.104 18:18:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.104 ************************************ 00:03:58.104 START TEST custom_alloc 00:03:58.104 ************************************ 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@170 -- # nodes_hp=() 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@83 -- # : 256 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.104 18:18:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.104 18:18:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.104 18:18:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:01.453 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:01.453 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.453 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
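The xtrace records above repeatedly step through `setup/common.sh`'s `get_meminfo`: it selects `/sys/devices/system/node/node<N>/meminfo` when a node is given, reads `key: value` pairs with `IFS=': '`, and echoes the value once the requested key (e.g. `HugePages_Surp`) matches. The following is a minimal standalone sketch of that parsing loop, not the script itself; the sample meminfo text is made up for illustration, whereas the real helper reads the sysfs file and strips a leading `Node <N> ` prefix first.

```shell
# Hedged sketch of the get_meminfo loop traced above (setup/common.sh@31-33).
# Reads "key: value" lines, prints the value for the requested key, else 0.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # matches the traced "echo 0" / "return 0" step
            return 0
        fi
    done
    echo 0                   # key absent: default to 0, as the script does
}

# Illustrative sample only; the real input is
# /sys/devices/system/node/node1/meminfo.
sample='HugePages_Total: 513
HugePages_Free: 513
HugePages_Surp: 0'

get_meminfo HugePages_Surp <<<"$sample"
```

Run against the sample above, this prints `0`, which is why the traced `nodes_test[node] += 0` additions in `setup/hugepages.sh@117` leave the per-node counts at their reserved values (512 and 513 in this odd_alloc run).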
00:04:01.453 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.453 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.453 18:18:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.453 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 75199796 kB' 'MemAvailable: 79882204 kB' 'Buffers: 16508 kB' 'Cached: 10830964 kB' 'SwapCached: 0 kB' 'Active: 6792252 kB' 'Inactive: 4612004 kB' 'Active(anon): 6403504 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559576 kB' 'Mapped: 166948 kB' 'Shmem: 5846720 kB' 'KReclaimable: 517848 kB' 'Slab: 1034044 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516196 kB' 'KernelStack: 19328 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962192 kB' 'Committed_AS: 7865444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211028 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB'
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.454 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 
18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 
18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.719 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 75197900 kB' 'MemAvailable: 79880308 kB' 'Buffers: 16508 kB' 'Cached: 10830964 kB' 'SwapCached: 0 kB' 'Active: 6795604 kB' 'Inactive: 4612004 kB' 'Active(anon): 6406856 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563472 kB' 'Mapped: 167272 kB' 'Shmem: 5846720 kB' 'KReclaimable: 517848 kB' 'Slab: 1033996 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516148 kB' 'KernelStack: 19328 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962192 kB' 'Committed_AS: 7868640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211016 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
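The long `[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] … continue` runs in this trace are get_meminfo scanning each meminfo line with `IFS=': ' read -r var val _` until the requested key matches. A minimal sketch of that pattern, assuming a simplified stand-in for setup/common.sh's get_meminfo that reads sample text instead of /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan traced above. Each "Key: value" line is
# split with IFS=': '; non-matching keys are skipped (the repeated
# "continue" entries in the log) and the matching value is printed.

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Sample input standing in for /proc/meminfo; values taken from the log.
sample='HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB'

get_meminfo HugePages_Total <<< "$sample"   # prints 1536
get_meminfo HugePages_Surp  <<< "$sample"   # prints 0
```

With the real file, the same loop over /proc/meminfo yields AnonHugePages for the `anon` check and HugePages_Surp for the `surp` check in verify_nr_hugepages.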
00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.720 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the HugePages_Surp scan tests each remaining /proc/meminfo field (Mapped, Shmem, KReclaimable, Slab, SReclaimable, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd) and hits `continue` on every non-match ...]
00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.721 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.722 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 75198560 kB' 'MemAvailable: 79880968 kB' 'Buffers: 16508 kB' 'Cached: 10830984 kB' 'SwapCached: 0 kB' 'Active: 6789956 kB' 'Inactive: 4612004 kB' 'Active(anon): 6401208 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557736 kB' 'Mapped: 166356 kB' 'Shmem: 5846740 kB' 'KReclaimable: 517848 kB' 'Slab: 1033996 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516148 kB' 'KernelStack: 19312 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962192 kB' 'Committed_AS: 7862544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211012 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:01.722 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.722 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.722 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.722 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.722 18:18:47
[... xtrace elided: the HugePages_Rsvd scan tests each remaining /proc/meminfo field (MemFree, MemAvailable, Buffers, Cached, ..., HugePages_Total, HugePages_Free) and hits `continue` on every non-match ...]
00:04:01.724 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:01.724 nr_hugepages=1536 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.724 resv_hugepages=0 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.724 surplus_hugepages=0 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.724 anon_hugepages=0 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.724 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 75198056 kB' 'MemAvailable: 79880464 kB' 'Buffers: 16508 kB' 'Cached: 10831004 kB' 'SwapCached: 0 kB' 'Active: 6790036 kB' 'Inactive: 4612004 kB' 'Active(anon): 6401288 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557848 kB' 'Mapped: 166356 kB' 'Shmem: 5846760 kB' 'KReclaimable: 517848 kB' 'Slab: 1033996 kB' 'SReclaimable: 517848 kB' 'SUnreclaim: 516148 kB' 'KernelStack: 19328 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962192 kB' 'Committed_AS: 7862564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211012 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:01.724 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 
18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.724 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 
18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 
1536 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.725 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 37322176 kB' 'MemUsed: 10796348 kB' 'SwapCached: 0 kB' 'Active: 4967092 kB' 'Inactive: 3555560 kB' 'Active(anon): 4797340 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8255112 kB' 'Mapped: 73140 kB' 'AnonPages: 270736 kB' 'Shmem: 4529800 kB' 'KernelStack: 10392 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325300 kB' 'Slab: 601428 kB' 'SReclaimable: 325300 kB' 'SUnreclaim: 276128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.726 18:18:47 setup.sh.hugepages.custom_alloc [... setup/common.sh@31-32 IFS/read/compare/continue cycle elided for node0 meminfo keys Mlocked through HugePages_Free ...] 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174380 kB' 'MemFree: 37876600 kB' 'MemUsed: 6297780 kB' 'SwapCached: 0 kB' 'Active: 1822924 kB' 'Inactive: 1056444 kB' 'Active(anon): 1603928 kB' 'Inactive(anon): 0 kB' 'Active(file): 218996 kB' 'Inactive(file): 1056444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2592420 kB' 'Mapped: 93216 kB' 'AnonPages: 287112 kB' 'Shmem: 1316980 kB' 'KernelStack: 8936 kB' 'PageTables: 4820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192548 kB' 'Slab: 432568 kB' 'SReclaimable: 192548 kB' 'SUnreclaim: 240020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.727 18:18:47 setup.sh.hugepages.custom_alloc [... setup/common.sh@31-32 IFS/read/compare/continue cycle elided for node1 meminfo keys MemTotal through HugePages_Free ...] 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.728 node0=512 expecting 512 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:01.728 node1=1024 expecting 1024 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:01.728 00:04:01.728 real 0m3.656s 00:04:01.728 user 0m1.554s 00:04:01.728 sys 0m2.173s 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.728 18:18:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.728 ************************************ 00:04:01.729 END TEST custom_alloc 00:04:01.729 ************************************ 00:04:01.988 18:18:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- #
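The long runs of `read` / `[[ key == HugePages_Surp ]]` / `continue` traced above are one function doing all the work: `get_meminfo` picks `/proc/meminfo` (or the per-node file when a NUMA node is given), strips the `Node <n>` prefix, and scans key/value pairs until the requested key matches. A minimal standalone sketch of that pattern, reconstructed from the trace (not the exact `setup/common.sh` source):

```shell
#!/usr/bin/env bash
# extglob is needed for the +([0-9]) pattern used to strip "Node <n> "
shopt -s extglob

# get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo,
# or from /sys/devices/system/node/node$NODE/meminfo when NODE is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    # Per-node override, as in setup/common.sh@23-24 in the trace
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # The long run of "continue" lines in the log is exactly this skip
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo MemTotal
```

The repeated `[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` lines in the log are the xtrace of the comparison inside this loop, one line per skipped meminfo key.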
return 0 00:04:01.988 18:18:47 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.988 18:18:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.988 18:18:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.988 18:18:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.988 ************************************ 00:04:01.988 START TEST no_shrink_alloc 00:04:01.988 ************************************ 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.988 18:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:04.520 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:05.091 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.091 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.091 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.091 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.091 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.091 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.091 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.091 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
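The `get_test_nr_hugepages 2097152 0` call traced above converts a size in kB into a hugepage count and records it per NUMA node: with the 2048 kB default hugepage size, 2097152 kB becomes `nr_hugepages=1024`, pinned to node 0. A rough sketch of that bookkeeping under those assumptions (names mirror the trace; this is an illustration, not the exact SPDK `setup/hugepages.sh` source):

```shell
#!/usr/bin/env bash
# Assumed from the trace: sizes are in kB and the default hugepage is 2 MiB.
default_hugepages=2048   # kB per hugepage
declare -A nodes_test=()

# get_test_nr_hugepages SIZE_KB [NODE...] -- compute the page count and
# assign it to each user-supplied node, as get_test_nr_hugepages_per_node
# does in the trace.
get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")
    local nr_hugepages node
    # Reject requests smaller than one hugepage (the @55 guard in the trace)
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
    echo "$nr_hugepages"
}

get_test_nr_hugepages 2097152 0   # 2 GiB request pinned to node 0
```

The later `verify_nr_hugepages` pass then reads `HugePages_Total` / `HugePages_Surp` back per node (via `get_meminfo`) and compares against `nodes_test`, which is what the `node0=512 expecting 512` style lines report.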
00:04:05.092 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.092 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.092 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76201552 kB' 'MemAvailable: 80883952 kB' 'Buffers: 16508 kB' 'Cached: 10831116 kB' 'SwapCached: 0 kB' 'Active: 6793728 kB' 'Inactive: 4612004 kB' 'Active(anon): 6404980 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560700 kB' 'Mapped: 166484 kB' 'Shmem: 5846872 kB' 'KReclaimable: 517840 kB' 'Slab: 1034024 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516184 kB' 'KernelStack: 19568 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7866048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211268 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.092 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.093 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76203916 kB' 'MemAvailable: 80886316 kB' 'Buffers: 16508 kB' 'Cached: 10831116 kB' 'SwapCached: 0 kB' 'Active: 6793032 kB' 'Inactive: 4612004 kB' 'Active(anon): 6404284 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560636 kB' 'Mapped: 166476 kB' 'Shmem: 5846872 kB' 'KReclaimable: 517840 kB' 'Slab: 1034184 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516344 kB' 'KernelStack: 19424 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7866064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211252 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.093 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.093 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.094 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.095 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76206004 kB' 'MemAvailable: 80888404 kB' 'Buffers: 16508 kB' 'Cached: 10831140 kB' 'SwapCached: 0 kB' 'Active: 6793000 kB' 'Inactive: 4612004 kB' 'Active(anon): 6404252 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560112 kB' 'Mapped: 166400 kB' 'Shmem: 5846896 kB' 'KReclaimable: 517840 kB' 'Slab: 1034096 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516256 kB' 'KernelStack: 19376 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7863248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211156 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.361 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.362 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.362 nr_hugepages=1024 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.362 resv_hugepages=0 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.362 surplus_hugepages=0 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.362 anon_hugepages=0 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76205880 kB' 'MemAvailable: 80888280 kB' 'Buffers: 16508 kB' 'Cached: 10831180 kB' 'SwapCached: 0 kB' 'Active: 6792144 kB' 'Inactive: 4612004 kB' 'Active(anon): 6403396 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559744 kB' 'Mapped: 166380 kB' 'Shmem: 5846936 kB' 'KReclaimable: 517840 kB' 'Slab: 1034096 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516256 kB' 'KernelStack: 19312 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7863268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211140 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.362 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.363 18:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.363 18:18:50 
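After fetching `HugePages_Total`, the trace performs the consistency check at `hugepages.sh@110`: the kernel-reported total must equal requested pages plus surplus plus reserved. A hedged sketch of that arithmetic (variable names assumed from the trace, not the actual `setup/hugepages.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the accounting check seen at hugepages.sh@107/@110:
# the kernel's HugePages_Total should equal nr_hugepages + surp + resv.
nr_hugepages=1024   # requested via nr_hugepages (trace: hugepages.sh@102)
surp=0              # surplus_hugepages (trace: hugepages.sh@104)
resv=0              # HugePages_Rsvd returned by get_meminfo
kernel_total=1024   # HugePages_Total returned by get_meminfo

if (( kernel_total == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent"
else
  echo "hugepage accounting mismatch" >&2
  exit 1
fi
```

With the values in this trace (1024 total, 0 surplus, 0 reserved) the check passes, which is why the run proceeds to the per-node `get_nodes` / `get_meminfo HugePages_Surp 0` phase that follows.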
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 36265404 kB' 'MemUsed: 11853120 kB' 'SwapCached: 0 kB' 'Active: 4966544 kB' 'Inactive: 3555560 kB' 'Active(anon): 4796792 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8255272 kB' 'Mapped: 73164 kB' 'AnonPages: 269992 kB' 'Shmem: 4529960 kB' 'KernelStack: 10360 kB' 'PageTables: 3320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325292 kB' 'Slab: 601356 kB' 'SReclaimable: 325292 kB' 'SUnreclaim: 276064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 
18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.363 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" checks repeated for every /proc/meminfo field from Mlocked through Unaccepted ...]
00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.364 node0=1024 expecting 1024 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.364 18:18:50 
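The loop traced above is the lookup pattern in setup/common.sh's get_meminfo: each "Key: value" line is split with `IFS=': ' read -r var val _`, every non-matching field falls through `continue`, and the matching field's value is echoed before `return 0`. A minimal standalone sketch of the same pattern (the function name and sample file below are illustrative, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Re-creation of the get_meminfo pattern visible in the trace: read each
# "Key: value" line, skip ("continue") every field until the key matches,
# then echo the value. IFS=': ' splits on both the colon and the space.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated continue lines in the log
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a fabricated snapshot (values taken from the log above)
snap=$(mktemp)
printf '%s\n' 'MemTotal: 92292904 kB' 'HugePages_Total: 1024' \
    'HugePages_Surp: 0' > "$snap"
get_meminfo_field HugePages_Surp "$snap"   # prints 0
rm -f "$snap"
```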
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.364 18:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:08.657 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:08.657 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:08.657 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:08.657 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:08.657 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76200428 kB' 'MemAvailable: 80882828 kB' 'Buffers: 16508 kB' 'Cached: 10831248 kB' 'SwapCached: 0 kB' 'Active: 6793864 kB' 'Inactive: 4612004 kB' 'Active(anon): 6405116 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560860 kB' 'Mapped: 166484 kB' 'Shmem: 5847004 kB' 'KReclaimable: 517840 kB' 'Slab: 1034216 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516376 kB' 'KernelStack: 19312 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7864048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211076 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.657 18:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" checks repeated for every /proc/meminfo field from MemAvailable through HardwareCorrupted ...]
00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# continue 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.657 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 
'MemFree: 76201060 kB' 'MemAvailable: 80883460 kB' 'Buffers: 16508 kB' 'Cached: 10831252 kB' 'SwapCached: 0 kB' 'Active: 6793484 kB' 'Inactive: 4612004 kB' 'Active(anon): 6404736 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560512 kB' 'Mapped: 166484 kB' 'Shmem: 5847008 kB' 'KReclaimable: 517840 kB' 'Slab: 1034212 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516372 kB' 'KernelStack: 19296 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7864068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211044 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.658 
00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [trace condensed: the identical IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue sequence repeats for every remaining meminfo field, MemAvailable through HugePages_Total, none matching] 00:04:08.658 18:18:54
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76201372 kB' 'MemAvailable: 80883772 kB' 'Buffers: 16508 kB' 'Cached: 10831252 kB' 'SwapCached: 0 kB' 'Active: 6793032 kB' 'Inactive: 4612004 kB' 'Active(anon): 6404284 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560568 kB' 'Mapped: 166384 kB' 'Shmem: 5847008 kB' 'KReclaimable: 517840 kB' 'Slab: 1034196 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516356 kB' 'KernelStack: 19328 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7864088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211044 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB' 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.658 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [trace condensed: the same IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue sequence repeats for each subsequent meminfo field; this chunk of the log ends mid-loop at the PageTables check]
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.659 nr_hugepages=1024 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.659 resv_hugepages=0 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.659 surplus_hugepages=0 00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.659 
00:04:08.659 anon_hugepages=0
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.659 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292904 kB' 'MemFree: 76201372 kB' 'MemAvailable: 80883772 kB' 'Buffers: 16508 kB' 'Cached: 10831308 kB' 'SwapCached: 0 kB' 'Active: 6792736 kB' 'Inactive: 4612004 kB' 'Active(anon): 6403988 kB' 'Inactive(anon): 0 kB' 'Active(file): 388748 kB' 'Inactive(file): 4612004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560168 kB' 'Mapped: 166384 kB' 'Shmem: 5847064 kB' 'KReclaimable: 517840 kB' 'Slab: 1034196 kB' 'SReclaimable: 517840 kB' 'SUnreclaim: 516356 kB' 'KernelStack: 19296 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486480 kB' 'Committed_AS: 7864112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211044 kB' 'VmallocChunk: 0 kB' 'Percpu: 107904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1403860 kB' 'DirectMap2M: 18198528 kB' 'DirectMap1G: 81788928 kB'
[... 18:18:54 setup.sh.hugepages.no_shrink_alloc: the setup/common.sh@31-@32 read/compare/continue cycle repeats identically for every field ahead of HugePages_Total in the snapshot above (MemTotal through Unaccepted); timestamps advance from 00:04:08.659 to 00:04:08.920 during the scan ...]
00:04:08.920 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.920 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.921 18:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 36254960 kB' 'MemUsed: 11863564 kB' 'SwapCached: 0 kB' 'Active: 4967840 kB' 'Inactive: 3555560 kB' 'Active(anon): 4798088 kB' 'Inactive(anon): 0 kB' 'Active(file): 169752 kB' 'Inactive(file): 3555560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8255392 kB' 'Mapped: 73168 kB' 'AnonPages: 271220 kB' 'Shmem: 4530080 kB' 'KernelStack: 10392 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325292 kB' 'Slab: 601592 kB' 'SReclaimable: 325292 kB' 'SUnreclaim: 276300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.921 node0=1024 expecting 1024 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.921 00:04:08.921 real 0m6.949s 00:04:08.921 user 0m2.853s 00:04:08.921 sys 0m4.229s 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.921 18:18:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.921 ************************************ 00:04:08.921 END TEST no_shrink_alloc 00:04:08.921 ************************************ 00:04:08.921 18:18:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:08.921 18:18:54 setup.sh.hugepages -- 
setup/hugepages.sh@37 -- # local node hp 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.921 18:18:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.921 00:04:08.921 real 0m26.491s 00:04:08.921 user 0m10.585s 00:04:08.921 sys 0m15.563s 00:04:08.921 18:18:54 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.921 18:18:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.921 ************************************ 00:04:08.921 END TEST hugepages 00:04:08.921 ************************************ 00:04:08.921 18:18:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.921 18:18:54 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:08.921 18:18:54 setup.sh -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:04:08.921 18:18:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.921 18:18:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.921 ************************************ 00:04:08.921 START TEST driver 00:04:08.921 ************************************ 00:04:08.922 18:18:54 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:08.922 * Looking for test storage... 00:04:09.180 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:09.181 18:18:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:09.181 18:18:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.181 18:18:54 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.388 18:18:58 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.388 18:18:58 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.388 18:18:58 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.388 18:18:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.645 ************************************ 00:04:13.645 START TEST guess_driver 00:04:13.645 ************************************ 00:04:13.645 18:18:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:13.645 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.645 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.645 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.645 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.645 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.645 18:18:58 
setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 220 > 0 )) 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.646 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.646 Looking for driver=vfio-pci 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.646 18:18:58 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:16.365 18:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ denied == \-\> ]] 00:04:16.365 18:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:16.365 18:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 
00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:04:16.933 18:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.878 18:19:03 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.159 00:04:23.159 real 0m9.114s 00:04:23.159 user 0m2.802s 00:04:23.159 sys 0m4.709s 00:04:23.159 18:19:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.159 18:19:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.159 ************************************ 00:04:23.159 END TEST guess_driver 00:04:23.159 ************************************ 00:04:23.159 18:19:08 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:23.159 00:04:23.159 real 0m13.710s 00:04:23.159 user 0m4.155s 00:04:23.159 sys 0m7.143s 00:04:23.159 18:19:08 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.159 18:19:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.159 ************************************ 00:04:23.159 END TEST driver 00:04:23.159 ************************************ 00:04:23.159 18:19:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:23.159 18:19:08 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 
00:04:23.159 18:19:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.159 18:19:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.159 18:19:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.159 ************************************ 00:04:23.159 START TEST devices 00:04:23.159 ************************************ 00:04:23.159 18:19:08 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:23.159 * Looking for test storage... 00:04:23.159 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:23.159 18:19:08 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.159 18:19:08 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:23.159 18:19:08 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.159 18:19:08 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.470 18:19:11 setup.sh.devices -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ host-managed != none ]] 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1674 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == 
*\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@203 -- # continue 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@203 -- # continue 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:26.470 18:19:11 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:04:26.470 No valid GPT data, bailing 00:04:26.470 18:19:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:26.470 18:19:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e 
/sys/block/nvme1n1 ]] 00:04:26.470 18:19:11 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:26.470 18:19:11 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.470 18:19:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.470 ************************************ 00:04:26.470 START TEST nvme_mount 00:04:26.470 ************************************ 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- 
setup/common.sh@40 -- # local part_no=1 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:26.470 18:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:27.864 Creating new GPT entries in memory. 00:04:27.864 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.864 other utilities. 00:04:27.864 18:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.864 18:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.864 18:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:27.864 18:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.864 18:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:2099199 00:04:28.803 Creating new GPT entries in memory. 00:04:28.803 The operation has completed successfully. 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2689368 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:28.803 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme1n1:nvme1n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme1n1:nvme1n1p1 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.804 18:19:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:31.337 18:19:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.337 18:19:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # 
found=1 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.905 
18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:31.905 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:31.905 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:32.164 /dev/nvme1n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:32.164 /dev/nvme1n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:32.164 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.164 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local 
dev=/dev/nvme1n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme1n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme1n1:nvme1n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.164 
18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.164 18:19:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.450 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.451 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.451 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.451 18:19:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme1n1 '' '' 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.710 18:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:39.001 18:19:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 
18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.001 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.001 18:19:24 
setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:39.002 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:39.002 18:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:39.002 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.002 00:04:39.002 real 0m12.501s 00:04:39.002 user 0m3.867s 00:04:39.002 sys 0m6.470s 00:04:39.002 18:19:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.002 18:19:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.002 ************************************ 00:04:39.002 END TEST nvme_mount 00:04:39.002 ************************************ 00:04:39.002 18:19:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:39.002 18:19:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.002 18:19:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.002 18:19:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.002 18:19:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.002 ************************************ 00:04:39.002 START TEST dm_mount 00:04:39.002 ************************************ 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- 
setup/common.sh@40 -- # local part_no=2 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:39.002 18:19:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:40.376 Creating new GPT entries in memory. 00:04:40.376 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.376 other utilities. 
00:04:40.376 18:19:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.376 18:19:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.376 18:19:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.376 18:19:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.377 18:19:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:2099199 00:04:41.313 Creating new GPT entries in memory. 00:04:41.313 The operation has completed successfully. 00:04:41.313 18:19:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.313 18:19:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.313 18:19:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.313 18:19:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.313 18:19:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:2099200:4196351 00:04:42.251 The operation has completed successfully. 
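The two `sgdisk --new` calls traced above (`1:2048:2099199` and `2:2099200:4196351`) come from the partition-offset arithmetic in `setup/common.sh`. A minimal sketch of that arithmetic, with the final `echo` added purely for illustration (the real script runs `flock /dev/nvme1n1 sgdisk` instead):

```shell
# Sketch of the layout math from setup/common.sh@57-60 as seen in the trace:
# two 1 GiB partitions placed back to back, the first starting at sector 2048.
size=1073741824          # bytes per partition (setup/common.sh@41)
(( size /= 512 ))        # convert to 512-byte sectors -> 2097152
part_no=2
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
  # first partition starts at 2048; each later one starts right after the previous
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "sgdisk /dev/nvme1n1 --new=${part}:${part_start}:${part_end}"
done
```

Running this prints the same `--new=1:2048:2099199` and `--new=2:2099200:4196351` ranges recorded in the xtrace above.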
00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2694043 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-2 ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-2 ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme1n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.251 
18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.251 18:19:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-2,holder@nvme1n1p2:dm-2,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.541 18:19:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.541 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.541 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:45.541 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:45.541 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.541 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.541 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme1n1p1:dm-2,holder@nvme1n1p2:dm-2 '' '' 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local 
mounts=holder@nvme1n1p1:dm-2,holder@nvme1n1p2:dm-2 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.801 18:19:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.802 18:19:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:49.090 18:19:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-2,holder@nvme1n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\2* ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- 
setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:49.090 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.090 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:49.091 18:19:34 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:49.091 00:04:49.091 real 0m9.985s 00:04:49.091 user 0m2.578s 00:04:49.091 sys 0m4.448s 00:04:49.091 18:19:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.091 18:19:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:49.091 ************************************ 00:04:49.091 END TEST dm_mount 00:04:49.091 ************************************ 00:04:49.091 18:19:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:49.091 18:19:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:49.349 /dev/nvme1n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.349 /dev/nvme1n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.349 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.349 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:49.349 18:19:34 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:49.350 18:19:34 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:49.350 18:19:34 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.350 18:19:34 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:49.350 18:19:34 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:49.350 18:19:34 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:49.350 18:19:34 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:49.350 00:04:49.350 real 0m26.689s 00:04:49.350 user 0m7.992s 00:04:49.350 sys 0m13.442s 00:04:49.350 18:19:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.350 18:19:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.350 ************************************ 00:04:49.350 END TEST devices 00:04:49.350 ************************************ 00:04:49.350 18:19:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:49.350 00:04:49.350 real 1m31.021s 00:04:49.350 user 0m31.221s 00:04:49.350 sys 0m50.423s 00:04:49.350 18:19:34 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.350 18:19:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.350 ************************************ 00:04:49.350 END TEST setup.sh 00:04:49.350 ************************************ 00:04:49.609 18:19:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.609 18:19:34 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:52.957 Hugepages 00:04:52.957 node 
hugesize free / total 00:04:52.957 node0 1048576kB 0 / 0 00:04:52.957 node0 2048kB 1024 / 1024 00:04:52.957 node1 1048576kB 0 / 0 00:04:52.957 node1 2048kB 1024 / 1024 00:04:52.957 00:04:52.957 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.957 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:52.957 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:52.957 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1 00:04:52.957 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme0 nvme0n1 nvme0n2 00:04:52.957 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:52.957 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:52.957 18:19:38 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.957 18:19:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.957 18:19:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.957 18:19:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:56.246 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:56.505 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.505 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.442 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.700 18:19:43 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.635 18:19:44 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.635 18:19:44 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.635 18:19:44 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.635 18:19:44 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.635 18:19:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.635 18:19:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.635 18:19:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.635 18:19:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.635 18:19:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.635 18:19:44 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.635 18:19:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:58.635 18:19:44 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.945 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:01.945 Waiting for block devices as requested 
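The OACS capability check that `autotest_common.sh` performs during this run (`grep oacs | cut -d: -f2` yielding `0xe`, then `oacs_ns_manage=8`) can be sketched as follows. The `sample` line is a canned stand-in for real `nvme id-ctrl /dev/nvme1` output, and the `0x8` mask is inferred from the traced values:

```shell
# Sketch of the namespace-management check: bit 3 of the Optional Admin
# Command Support (OACS) field gates NVMe namespace management.
sample='oacs      : 0xe'                            # stand-in for `nvme id-ctrl` output
oacs=$(echo "$sample" | grep oacs | cut -d: -f2)    # -> ' 0xe'
(( oacs_ns_manage = oacs & 0x8 ))                   # mask the ns-management bit -> 8
if [[ $oacs_ns_manage -ne 0 ]]; then
  echo "namespace management supported"
fi
```

With `oacs` equal to `0xe` (binary 1110), the masked value is `8`, matching the `[[ 8 -ne 0 ]]` branch taken in the trace.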
00:05:01.945 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:01.945 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:01.945 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.204 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.204 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.204 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.462 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.462 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.462 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:02.462 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.720 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.720 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.720 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.978 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.978 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.978 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.978 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.237 18:19:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.237 18:19:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:03.237 18:19:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.237 18:19:48 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:03.237 18:19:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:03.237 18:19:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 ]] 00:05:03.237 18:19:48 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:03.237 18:19:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:03.237 18:19:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:03.237 18:19:48 -- 
common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:03.237 18:19:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:03.237 18:19:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.237 18:19:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.237 18:19:48 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:03.237 18:19:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.237 18:19:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.237 18:19:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:03.237 18:19:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.237 18:19:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.237 18:19:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.237 18:19:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.237 18:19:48 -- common/autotest_common.sh@1557 -- # continue 00:05:03.237 18:19:48 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.237 18:19:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.237 18:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:03.237 18:19:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.237 18:19:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.237 18:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:03.237 18:19:48 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:06.546 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:05:06.546 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:05:06.546 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.546 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.804 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.805 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.739 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:07.739 18:19:53 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.739 18:19:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.739 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:07.739 18:19:53 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.739 18:19:53 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.739 18:19:53 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.739 18:19:53 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.739 18:19:53 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.739 18:19:53 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.739 18:19:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.739 18:19:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.739 18:19:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.739 18:19:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.739 18:19:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.739 18:19:53 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.739 18:19:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:07.739 18:19:53 -- common/autotest_common.sh@1579 -- # for bdf in 
$(get_nvme_bdfs) 00:05:07.739 18:19:53 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:07.739 18:19:53 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:07.739 18:19:53 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:07.739 18:19:53 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:07.739 18:19:53 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:07.739 18:19:53 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:07.739 18:19:53 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2704046 00:05:07.739 18:19:53 -- common/autotest_common.sh@1598 -- # waitforlisten 2704046 00:05:07.739 18:19:53 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.739 18:19:53 -- common/autotest_common.sh@829 -- # '[' -z 2704046 ']' 00:05:07.739 18:19:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.739 18:19:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.739 18:19:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.739 18:19:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.739 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:07.739 [2024-07-15 18:19:53.263976] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:07.739 [2024-07-15 18:19:53.264042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2704046 ] 00:05:07.997 [2024-07-15 18:19:53.356709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.997 [2024-07-15 18:19:53.452097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.932 18:19:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.932 18:19:54 -- common/autotest_common.sh@862 -- # return 0 00:05:08.932 18:19:54 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:08.932 18:19:54 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:08.932 18:19:54 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:12.218 nvme0n1 00:05:12.218 18:19:57 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:12.218 [2024-07-15 18:19:57.548219] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:12.218 request: 00:05:12.218 { 00:05:12.218 "nvme_ctrlr_name": "nvme0", 00:05:12.218 "password": "test", 00:05:12.218 "method": "bdev_nvme_opal_revert", 00:05:12.218 "req_id": 1 00:05:12.218 } 00:05:12.218 Got JSON-RPC error response 00:05:12.218 response: 00:05:12.218 { 00:05:12.218 "code": -32602, 00:05:12.218 "message": "Invalid parameters" 00:05:12.218 } 00:05:12.218 18:19:57 -- common/autotest_common.sh@1604 -- # true 00:05:12.218 18:19:57 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:12.218 18:19:57 -- common/autotest_common.sh@1608 -- # killprocess 2704046 00:05:12.218 18:19:57 -- common/autotest_common.sh@948 -- # '[' -z 2704046 ']' 00:05:12.218 18:19:57 -- 
common/autotest_common.sh@952 -- # kill -0 2704046 00:05:12.218 18:19:57 -- common/autotest_common.sh@953 -- # uname 00:05:12.218 18:19:57 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.218 18:19:57 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704046 00:05:12.218 18:19:57 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.218 18:19:57 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.218 18:19:57 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704046' 00:05:12.218 killing process with pid 2704046 00:05:12.218 18:19:57 -- common/autotest_common.sh@967 -- # kill 2704046 00:05:12.218 18:19:57 -- common/autotest_common.sh@972 -- # wait 2704046 00:05:14.150 18:19:59 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:14.150 18:19:59 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:14.150 18:19:59 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:14.150 18:19:59 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:14.150 18:19:59 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:14.409 Restarting all devices. 00:05:18.602 lstat() error: No such file or directory 00:05:18.602 QAT Error: No GENERAL section found 00:05:18.602 Failed to configure qat_dev0 00:05:18.602 lstat() error: No such file or directory 00:05:18.602 QAT Error: No GENERAL section found 00:05:18.602 Failed to configure qat_dev1 00:05:18.602 lstat() error: No such file or directory 00:05:18.602 QAT Error: No GENERAL section found 00:05:18.602 Failed to configure qat_dev2 00:05:18.602 enable sriov 00:05:18.602 Checking status of all devices. 
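The `killprocess` path above first probes the target pid with `kill -0` and inspects its name via `ps --no-headers -o comm=` before sending the real signal. A minimal sketch of that liveness check, using a throwaway `sleep` in place of `spdk_tgt` (the helper name here is invented for illustration):

```shell
# kill -0 delivers no signal; it only tests whether the pid exists and is
# signalable, which is the same check killprocess performs above.
is_running() { kill -0 "$1" 2>/dev/null; }

sleep 5 &                  # stand-in for the spdk_tgt process
pid=$!
is_running "$pid" && state=running || state=gone
echo "$state"
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
```

Checking liveness before killing avoids signalling a recycled pid by mistake.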
00:05:18.602 There is 3 QAT acceleration device(s) in the system: 00:05:18.602 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:18.602 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:18.602 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:18.602 0000:1a:00.0 set to 16 VFs 00:05:19.539 0000:1c:00.0 set to 16 VFs 00:05:20.487 0000:1e:00.0 set to 16 VFs 00:05:21.863 Properly configured the qat device with driver uio_pci_generic. 00:05:21.863 18:20:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:21.863 18:20:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.863 18:20:07 -- common/autotest_common.sh@10 -- # set +x 00:05:21.863 18:20:07 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:21.863 18:20:07 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:21.863 18:20:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.863 18:20:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.863 18:20:07 -- common/autotest_common.sh@10 -- # set +x 00:05:21.863 ************************************ 00:05:21.863 START TEST env 00:05:21.863 ************************************ 00:05:21.863 18:20:07 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:21.863 * Looking for test storage... 
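The `qat_setup.sh` run above reports each c6xx endpoint being "set to 16 VFs"; on Linux, SR-IOV virtual functions are enabled by writing the count to the device's `sriov_numvfs` sysfs attribute. A dry-run sketch of that step (it prints the action instead of writing to sysfs, since the real write needs root and the hardware):

```shell
enable_vfs() {
  local bdf=$1 numvfs=$2
  # The real operation would be:
  #   echo "$numvfs" > "/sys/bus/pci/devices/$bdf/sriov_numvfs"
  printf '%s set to %s VFs\n' "$bdf" "$numvfs"
}

for bdf in 0000:1a:00.0 0000:1c:00.0 0000:1e:00.0; do
  enable_vfs "$bdf" 16
done
```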
00:05:21.863 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:21.863 18:20:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:21.863 18:20:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.863 18:20:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.863 18:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.863 ************************************ 00:05:21.863 START TEST env_memory 00:05:21.863 ************************************ 00:05:21.863 18:20:07 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:21.863 00:05:21.863 00:05:21.863 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.863 http://cunit.sourceforge.net/ 00:05:21.863 00:05:21.863 00:05:21.863 Suite: memory 00:05:21.863 Test: alloc and free memory map ...[2024-07-15 18:20:07.283474] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:21.863 passed 00:05:21.863 Test: mem map translation ...[2024-07-15 18:20:07.313967] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:21.863 [2024-07-15 18:20:07.313988] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:21.863 [2024-07-15 18:20:07.314046] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:21.863 [2024-07-15 18:20:07.314058] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:21.863 passed 00:05:21.863 Test: mem map registration ...[2024-07-15 18:20:07.376609] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:21.863 [2024-07-15 18:20:07.376628] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:21.863 passed 00:05:22.123 Test: mem map adjacent registrations ...passed 00:05:22.124 00:05:22.124 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.124 suites 1 1 n/a 0 0 00:05:22.124 tests 4 4 4 0 0 00:05:22.124 asserts 152 152 152 0 n/a 00:05:22.124 00:05:22.124 Elapsed time = 0.214 seconds 00:05:22.124 00:05:22.124 real 0m0.226s 00:05:22.124 user 0m0.215s 00:05:22.124 sys 0m0.010s 00:05:22.124 18:20:07 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.124 18:20:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.124 ************************************ 00:05:22.124 END TEST env_memory 00:05:22.124 ************************************ 00:05:22.124 18:20:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:22.124 18:20:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.124 18:20:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.124 18:20:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.124 18:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.124 ************************************ 00:05:22.124 START TEST env_vtophys 00:05:22.124 ************************************ 00:05:22.124 18:20:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.124 
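The `mem map translation` test above deliberately passes pairs like `vaddr=2097152 len=1234` and `vaddr=1234 len=2097152`, and each is rejected: both the address and the length must be aligned to the 2 MiB hugepage size. A sketch of that parameter check (not SPDK's actual code, just the arithmetic it implies):

```shell
align=$((2 * 1024 * 1024))   # 2 MiB hugepage size, as in the log

valid() {
  # Both the virtual address and the length must be hugepage-aligned.
  [ $(( $1 % align )) -eq 0 ] && [ $(( $2 % align )) -eq 0 ]
}

valid 2097152 1234 && echo "2097152/1234: ok" || echo "2097152/1234: invalid"
valid 1234 2097152 && echo "1234/2097152: ok" || echo "1234/2097152: invalid"
```

Both calls report invalid, which is why both ERROR lines appear in the passing test: the test asserts the rejection.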
EAL: lib.eal log level changed from notice to debug 00:05:22.124 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.124 EAL: Detected lcore 1 as core 1 on socket 0 00:05:22.124 EAL: Detected lcore 2 as core 2 on socket 0 00:05:22.124 EAL: Detected lcore 3 as core 3 on socket 0 00:05:22.124 EAL: Detected lcore 4 as core 4 on socket 0 00:05:22.124 EAL: Detected lcore 5 as core 5 on socket 0 00:05:22.124 EAL: Detected lcore 6 as core 8 on socket 0 00:05:22.124 EAL: Detected lcore 7 as core 9 on socket 0 00:05:22.124 EAL: Detected lcore 8 as core 10 on socket 0 00:05:22.124 EAL: Detected lcore 9 as core 11 on socket 0 00:05:22.124 EAL: Detected lcore 10 as core 12 on socket 0 00:05:22.124 EAL: Detected lcore 11 as core 16 on socket 0 00:05:22.124 EAL: Detected lcore 12 as core 17 on socket 0 00:05:22.124 EAL: Detected lcore 13 as core 18 on socket 0 00:05:22.124 EAL: Detected lcore 14 as core 19 on socket 0 00:05:22.124 EAL: Detected lcore 15 as core 20 on socket 0 00:05:22.124 EAL: Detected lcore 16 as core 21 on socket 0 00:05:22.124 EAL: Detected lcore 17 as core 24 on socket 0 00:05:22.124 EAL: Detected lcore 18 as core 25 on socket 0 00:05:22.124 EAL: Detected lcore 19 as core 26 on socket 0 00:05:22.124 EAL: Detected lcore 20 as core 27 on socket 0 00:05:22.124 EAL: Detected lcore 21 as core 28 on socket 0 00:05:22.124 EAL: Detected lcore 22 as core 0 on socket 1 00:05:22.124 EAL: Detected lcore 23 as core 1 on socket 1 00:05:22.124 EAL: Detected lcore 24 as core 2 on socket 1 00:05:22.124 EAL: Detected lcore 25 as core 3 on socket 1 00:05:22.124 EAL: Detected lcore 26 as core 4 on socket 1 00:05:22.124 EAL: Detected lcore 27 as core 5 on socket 1 00:05:22.124 EAL: Detected lcore 28 as core 8 on socket 1 00:05:22.124 EAL: Detected lcore 29 as core 9 on socket 1 00:05:22.124 EAL: Detected lcore 30 as core 10 on socket 1 00:05:22.124 EAL: Detected lcore 31 as core 11 on socket 1 00:05:22.124 EAL: Detected lcore 32 as core 12 on socket 1 00:05:22.124 EAL: Detected 
lcore 33 as core 16 on socket 1 00:05:22.124 EAL: Detected lcore 34 as core 17 on socket 1 00:05:22.124 EAL: Detected lcore 35 as core 18 on socket 1 00:05:22.124 EAL: Detected lcore 36 as core 19 on socket 1 00:05:22.124 EAL: Detected lcore 37 as core 20 on socket 1 00:05:22.124 EAL: Detected lcore 38 as core 21 on socket 1 00:05:22.124 EAL: Detected lcore 39 as core 24 on socket 1 00:05:22.124 EAL: Detected lcore 40 as core 25 on socket 1 00:05:22.124 EAL: Detected lcore 41 as core 26 on socket 1 00:05:22.124 EAL: Detected lcore 42 as core 27 on socket 1 00:05:22.124 EAL: Detected lcore 43 as core 28 on socket 1 00:05:22.124 EAL: Detected lcore 44 as core 0 on socket 0 00:05:22.124 EAL: Detected lcore 45 as core 1 on socket 0 00:05:22.124 EAL: Detected lcore 46 as core 2 on socket 0 00:05:22.124 EAL: Detected lcore 47 as core 3 on socket 0 00:05:22.124 EAL: Detected lcore 48 as core 4 on socket 0 00:05:22.124 EAL: Detected lcore 49 as core 5 on socket 0 00:05:22.124 EAL: Detected lcore 50 as core 8 on socket 0 00:05:22.124 EAL: Detected lcore 51 as core 9 on socket 0 00:05:22.124 EAL: Detected lcore 52 as core 10 on socket 0 00:05:22.124 EAL: Detected lcore 53 as core 11 on socket 0 00:05:22.124 EAL: Detected lcore 54 as core 12 on socket 0 00:05:22.124 EAL: Detected lcore 55 as core 16 on socket 0 00:05:22.124 EAL: Detected lcore 56 as core 17 on socket 0 00:05:22.124 EAL: Detected lcore 57 as core 18 on socket 0 00:05:22.124 EAL: Detected lcore 58 as core 19 on socket 0 00:05:22.124 EAL: Detected lcore 59 as core 20 on socket 0 00:05:22.124 EAL: Detected lcore 60 as core 21 on socket 0 00:05:22.124 EAL: Detected lcore 61 as core 24 on socket 0 00:05:22.124 EAL: Detected lcore 62 as core 25 on socket 0 00:05:22.124 EAL: Detected lcore 63 as core 26 on socket 0 00:05:22.124 EAL: Detected lcore 64 as core 27 on socket 0 00:05:22.124 EAL: Detected lcore 65 as core 28 on socket 0 00:05:22.124 EAL: Detected lcore 66 as core 0 on socket 1 00:05:22.124 EAL: Detected 
lcore 67 as core 1 on socket 1 00:05:22.124 EAL: Detected lcore 68 as core 2 on socket 1 00:05:22.124 EAL: Detected lcore 69 as core 3 on socket 1 00:05:22.124 EAL: Detected lcore 70 as core 4 on socket 1 00:05:22.124 EAL: Detected lcore 71 as core 5 on socket 1 00:05:22.124 EAL: Detected lcore 72 as core 8 on socket 1 00:05:22.124 EAL: Detected lcore 73 as core 9 on socket 1 00:05:22.124 EAL: Detected lcore 74 as core 10 on socket 1 00:05:22.124 EAL: Detected lcore 75 as core 11 on socket 1 00:05:22.124 EAL: Detected lcore 76 as core 12 on socket 1 00:05:22.124 EAL: Detected lcore 77 as core 16 on socket 1 00:05:22.124 EAL: Detected lcore 78 as core 17 on socket 1 00:05:22.124 EAL: Detected lcore 79 as core 18 on socket 1 00:05:22.124 EAL: Detected lcore 80 as core 19 on socket 1 00:05:22.124 EAL: Detected lcore 81 as core 20 on socket 1 00:05:22.124 EAL: Detected lcore 82 as core 21 on socket 1 00:05:22.124 EAL: Detected lcore 83 as core 24 on socket 1 00:05:22.124 EAL: Detected lcore 84 as core 25 on socket 1 00:05:22.124 EAL: Detected lcore 85 as core 26 on socket 1 00:05:22.124 EAL: Detected lcore 86 as core 27 on socket 1 00:05:22.124 EAL: Detected lcore 87 as core 28 on socket 1 00:05:22.124 EAL: Maximum logical cores by configuration: 128 00:05:22.124 EAL: Detected CPU lcores: 88 00:05:22.124 EAL: Detected NUMA nodes: 2 00:05:22.124 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:22.124 EAL: Detected shared linkage of DPDK 00:05:22.124 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.124 EAL: No shared files mode enabled, IPC is disabled 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:22.124 EAL: PCI 
driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 
0000:1c:02.7 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:22.124 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:22.124 EAL: Bus pci wants IOVA as 'PA' 00:05:22.124 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:22.124 EAL: Bus vdev wants IOVA as 'DC' 00:05:22.124 EAL: Selected IOVA mode 'PA' 00:05:22.124 EAL: Probing VFIO support... 00:05:22.124 EAL: IOMMU type 1 (Type 1) is supported 00:05:22.124 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:22.124 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:22.124 EAL: VFIO support initialized 00:05:22.124 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.124 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.124 EAL: Setting up physically contiguous memory... 
00:05:22.124 EAL: Setting maximum number of open files to 524288 00:05:22.124 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.124 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:22.125 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.125 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:22.125 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.125 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:22.125 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.125 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.125 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:22.125 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:22.125 EAL: Hugepages will be freed exactly as allocated. 
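Each memseg list above reserves a virtual-address window of size 0x400000000; that figure follows directly from the same log lines, n_segs × hugepage size = 8192 × 2 MiB = 16 GiB:

```shell
n_segs=8192                            # from "n_segs:8192" above
hugepage_sz=$((2 * 1024 * 1024))       # 2097152 bytes, the 2 MiB page size above
printf '0x%x\n' $(( n_segs * hugepage_sz ))   # size of one memseg-list reservation
```

This prints `0x400000000`, matching every "size 400000000" reservation in the log.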
00:05:22.125 EAL: No shared files mode enabled, IPC is disabled 00:05:22.125 EAL: No shared files mode enabled, IPC is disabled 00:05:22.125 EAL: TSC frequency is ~2100000 KHz 00:05:22.125 EAL: Main lcore 0 is ready (tid=7f80127a3b00;cpuset=[0]) 00:05:22.125 EAL: Trying to obtain current memory policy. 00:05:22.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.125 EAL: Restoring previous memory policy: 0 00:05:22.125 EAL: request: mp_malloc_sync 00:05:22.125 EAL: No shared files mode enabled, IPC is disabled 00:05:22.125 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.125 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001000000 00:05:22.125 EAL: PCI memory mapped at 0x202001001000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001002000 00:05:22.125 EAL: PCI memory mapped at 0x202001003000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001004000 00:05:22.125 EAL: PCI memory mapped at 0x202001005000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001006000 00:05:22.125 EAL: PCI memory mapped at 0x202001007000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001008000 00:05:22.125 EAL: PCI memory mapped at 0x202001009000 00:05:22.125 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200100a000 00:05:22.125 EAL: PCI memory mapped at 0x20200100b000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200100c000 00:05:22.125 EAL: PCI memory mapped at 0x20200100d000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200100e000 00:05:22.125 EAL: PCI memory mapped at 0x20200100f000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001010000 00:05:22.125 EAL: PCI memory mapped at 0x202001011000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001012000 00:05:22.125 EAL: PCI memory mapped at 0x202001013000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001014000 00:05:22.125 EAL: PCI memory mapped at 0x202001015000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 
0x202001016000 00:05:22.125 EAL: PCI memory mapped at 0x202001017000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001018000 00:05:22.125 EAL: PCI memory mapped at 0x202001019000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200101a000 00:05:22.125 EAL: PCI memory mapped at 0x20200101b000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200101c000 00:05:22.125 EAL: PCI memory mapped at 0x20200101d000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:22.125 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200101e000 00:05:22.125 EAL: PCI memory mapped at 0x20200101f000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001020000 00:05:22.125 EAL: PCI memory mapped at 0x202001021000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001022000 00:05:22.125 EAL: PCI memory mapped at 0x202001023000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 
00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001024000 00:05:22.125 EAL: PCI memory mapped at 0x202001025000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001026000 00:05:22.125 EAL: PCI memory mapped at 0x202001027000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001028000 00:05:22.125 EAL: PCI memory mapped at 0x202001029000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200102a000 00:05:22.125 EAL: PCI memory mapped at 0x20200102b000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200102c000 00:05:22.125 EAL: PCI memory mapped at 0x20200102d000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x20200102e000 00:05:22.125 EAL: PCI memory mapped at 0x20200102f000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001030000 00:05:22.125 EAL: PCI memory mapped at 0x202001031000 00:05:22.125 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:02.0 (socket 0) 00:05:22.125 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:22.125 EAL: probe driver: 8086:37c9 qat 00:05:22.125 EAL: PCI memory mapped at 0x202001032000 00:05:22.125 EAL: PCI memory mapped at 0x202001033000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:22.126 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001034000 00:05:22.126 EAL: PCI memory mapped at 0x202001035000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:22.126 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001036000 00:05:22.126 EAL: PCI memory mapped at 0x202001037000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:22.126 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001038000 00:05:22.126 EAL: PCI memory mapped at 0x202001039000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:22.126 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200103a000 00:05:22.126 EAL: PCI memory mapped at 0x20200103b000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:22.126 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200103c000 00:05:22.126 EAL: PCI memory mapped at 0x20200103d000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:22.126 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200103e000 00:05:22.126 EAL: PCI memory 
mapped at 0x20200103f000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001040000 00:05:22.126 EAL: PCI memory mapped at 0x202001041000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001042000 00:05:22.126 EAL: PCI memory mapped at 0x202001043000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001044000 00:05:22.126 EAL: PCI memory mapped at 0x202001045000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001046000 00:05:22.126 EAL: PCI memory mapped at 0x202001047000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001048000 00:05:22.126 EAL: PCI memory mapped at 0x202001049000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200104a000 00:05:22.126 EAL: PCI memory mapped at 0x20200104b000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 
00:05:22.126 EAL: PCI memory mapped at 0x20200104c000 00:05:22.126 EAL: PCI memory mapped at 0x20200104d000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200104e000 00:05:22.126 EAL: PCI memory mapped at 0x20200104f000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001050000 00:05:22.126 EAL: PCI memory mapped at 0x202001051000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001052000 00:05:22.126 EAL: PCI memory mapped at 0x202001053000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001054000 00:05:22.126 EAL: PCI memory mapped at 0x202001055000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001056000 00:05:22.126 EAL: PCI memory mapped at 0x202001057000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x202001058000 00:05:22.126 EAL: PCI memory mapped at 0x202001059000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:22.126 EAL: PCI 
device 0000:1e:02.5 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200105a000 00:05:22.126 EAL: PCI memory mapped at 0x20200105b000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200105c000 00:05:22.126 EAL: PCI memory mapped at 0x20200105d000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:22.126 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:22.126 EAL: probe driver: 8086:37c9 qat 00:05:22.126 EAL: PCI memory mapped at 0x20200105e000 00:05:22.126 EAL: PCI memory mapped at 0x20200105f000 00:05:22.126 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:22.126 EAL: No shared files mode enabled, IPC is disabled 00:05:22.126 EAL: No shared files mode enabled, IPC is disabled 00:05:22.126 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.126 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.126 00:05:22.126 00:05:22.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.126 http://cunit.sourceforge.net/ 00:05:22.126 00:05:22.126 00:05:22.126 Suite: components_suite 00:05:22.126 Test: vtophys_malloc_test ...passed 00:05:22.126 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:22.126 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.126 EAL: Restoring previous memory policy: 4
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was expanded by 4MB
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was shrunk by 4MB
00:05:22.126 EAL: Trying to obtain current memory policy.
00:05:22.126 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.126 EAL: Restoring previous memory policy: 4
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was expanded by 6MB
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was shrunk by 6MB
00:05:22.126 EAL: Trying to obtain current memory policy.
00:05:22.126 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.126 EAL: Restoring previous memory policy: 4
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was expanded by 10MB
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was shrunk by 10MB
00:05:22.126 EAL: Trying to obtain current memory policy.
00:05:22.126 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.126 EAL: Restoring previous memory policy: 4
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was expanded by 18MB
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was shrunk by 18MB
00:05:22.126 EAL: Trying to obtain current memory policy.
00:05:22.126 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.126 EAL: Restoring previous memory policy: 4
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.126 EAL: request: mp_malloc_sync
00:05:22.126 EAL: No shared files mode enabled, IPC is disabled
00:05:22.126 EAL: Heap on socket 0 was expanded by 34MB
00:05:22.126 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.385 EAL: request: mp_malloc_sync
00:05:22.385 EAL: No shared files mode enabled, IPC is disabled
00:05:22.385 EAL: Heap on socket 0 was shrunk by 34MB
00:05:22.385 EAL: Trying to obtain current memory policy.
00:05:22.385 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.385 EAL: Restoring previous memory policy: 4
00:05:22.385 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.385 EAL: request: mp_malloc_sync
00:05:22.385 EAL: No shared files mode enabled, IPC is disabled
00:05:22.385 EAL: Heap on socket 0 was expanded by 66MB
00:05:22.385 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.385 EAL: request: mp_malloc_sync
00:05:22.385 EAL: No shared files mode enabled, IPC is disabled
00:05:22.385 EAL: Heap on socket 0 was shrunk by 66MB
00:05:22.385 EAL: Trying to obtain current memory policy.
00:05:22.385 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.385 EAL: Restoring previous memory policy: 4
00:05:22.385 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.385 EAL: request: mp_malloc_sync
00:05:22.385 EAL: No shared files mode enabled, IPC is disabled
00:05:22.385 EAL: Heap on socket 0 was expanded by 130MB
00:05:22.385 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.385 EAL: request: mp_malloc_sync
00:05:22.385 EAL: No shared files mode enabled, IPC is disabled
00:05:22.385 EAL: Heap on socket 0 was shrunk by 130MB
00:05:22.385 EAL: Trying to obtain current memory policy.
00:05:22.385 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.385 EAL: Restoring previous memory policy: 4
00:05:22.385 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.386 EAL: request: mp_malloc_sync
00:05:22.386 EAL: No shared files mode enabled, IPC is disabled
00:05:22.386 EAL: Heap on socket 0 was expanded by 258MB
00:05:22.386 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.386 EAL: request: mp_malloc_sync
00:05:22.386 EAL: No shared files mode enabled, IPC is disabled
00:05:22.386 EAL: Heap on socket 0 was shrunk by 258MB
00:05:22.386 EAL: Trying to obtain current memory policy.
00:05:22.386 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.645 EAL: Restoring previous memory policy: 4
00:05:22.645 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.645 EAL: request: mp_malloc_sync
00:05:22.645 EAL: No shared files mode enabled, IPC is disabled
00:05:22.645 EAL: Heap on socket 0 was expanded by 514MB
00:05:22.645 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.645 EAL: request: mp_malloc_sync
00:05:22.645 EAL: No shared files mode enabled, IPC is disabled
00:05:22.645 EAL: Heap on socket 0 was shrunk by 514MB
00:05:22.645 EAL: Trying to obtain current memory policy.
00:05:22.645 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:22.904 EAL: Restoring previous memory policy: 4
00:05:22.904 EAL: Calling mem event callback 'spdk:(nil)'
00:05:22.904 EAL: request: mp_malloc_sync
00:05:22.904 EAL: No shared files mode enabled, IPC is disabled
00:05:22.904 EAL: Heap on socket 0 was expanded by 1026MB
00:05:23.163 EAL: Calling mem event callback 'spdk:(nil)'
00:05:23.163 EAL: request: mp_malloc_sync
00:05:23.163 EAL: No shared files mode enabled, IPC is disabled
00:05:23.163 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:23.163 passed
00:05:23.163
00:05:23.163 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:23.163               suites      1      1    n/a      0        0
00:05:23.163                tests      2      2      2      0        0
00:05:23.163              asserts   6380   6380   6380      0      n/a
00:05:23.163
00:05:23.163 Elapsed time = 1.033 seconds
00:05:23.423 EAL: No shared files mode enabled, IPC is disabled
00:05:23.423 EAL: No shared files mode enabled, IPC is disabled
00:05:23.423 EAL: No shared files mode enabled, IPC is disabled
00:05:23.423
00:05:23.423 real 0m1.193s
00:05:23.423 user 0m0.685s
00:05:23.423 sys 0m0.481s
00:05:23.423 18:20:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.423 18:20:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:23.423 ************************************
00:05:23.423 END TEST env_vtophys
00:05:23.423 ************************************
00:05:23.423 18:20:08 env -- common/autotest_common.sh@1142 -- # return 0
00:05:23.423 18:20:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:23.423 18:20:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:23.423 18:20:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:23.423 18:20:08 env -- common/autotest_common.sh@10 -- # set +x
00:05:23.423 ************************************
00:05:23.423 START TEST env_pci
00:05:23.423 ************************************
00:05:23.423 18:20:08 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:23.423
00:05:23.423
00:05:23.423 CUnit - A unit testing framework for C - Version 2.1-3
00:05:23.423 http://cunit.sourceforge.net/
00:05:23.423
00:05:23.423
00:05:23.423 Suite: pci
00:05:23.423 Test: pci_hook ...[2024-07-15 18:20:08.808378] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2706695 has claimed it
00:05:23.423 EAL: Cannot find device (10000:00:01.0)
00:05:23.423 EAL: Failed to attach device on primary process
00:05:23.423 passed
00:05:23.423
00:05:23.423 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:23.423               suites      1      1    n/a      0        0
00:05:23.423                tests      1      1      1      0        0
00:05:23.423              asserts     25     25     25      0      n/a
00:05:23.423
00:05:23.423 Elapsed time = 0.030 seconds
00:05:23.423
00:05:23.423 real 0m0.055s
00:05:23.423 user 0m0.017s
00:05:23.423 sys 0m0.038s
00:05:23.423 18:20:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.423 18:20:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:23.423 ************************************
00:05:23.423 END TEST env_pci
00:05:23.423 ************************************
00:05:23.423 18:20:08 env -- common/autotest_common.sh@1142 -- # return 0
00:05:23.423 18:20:08 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:23.423 18:20:08 env -- env/env.sh@15 -- # uname
00:05:23.423 18:20:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:23.423 18:20:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:23.423 18:20:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:23.423 18:20:08 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:23.423 18:20:08 env --
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.423 18:20:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.423 ************************************ 00:05:23.423 START TEST env_dpdk_post_init 00:05:23.423 ************************************ 00:05:23.423 18:20:08 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.423 EAL: Detected CPU lcores: 88 00:05:23.423 EAL: Detected NUMA nodes: 2 00:05:23.423 EAL: Detected shared linkage of DPDK 00:05:23.423 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.683 EAL: Selected IOVA mode 'PA' 00:05:23.683 EAL: VFIO support initialized 00:05:23.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:23.683 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:23.683 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.683 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:23.683 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:23.683 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:23.683 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.683 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:23.683 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:23.683 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:23.683 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.683 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:23.683 CRYPTODEV: 
Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 
00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 
0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:23.684 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:23.684 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:23.684 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:23.684-00:05:23.685 EAL: Probe PCI driver: qat (8086:37c9), devices 0000:1c:01.0-0000:1c:01.7, 0000:1c:02.0-0000:1c:02.7, 0000:1e:01.0-0000:1e:01.7, 0000:1e:02.0-0000:1e:02.7 (socket 0); for each device, CRYPTODEV: Creating cryptodev <bdf>_qat_asym and <bdf>_qat_sym, Initialisation parameters - socket id: 0, max queue pairs: 0
00:05:23.685 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:23.685 EAL: Using IOMMU type 1 (Type 1)
00:05:23.685 EAL: Ignore mapping IO port bar(1) / Probe PCI driver: spdk_ioat (8086:2021), repeated for devices 0000:00:04.0-0000:00:04.7 (socket 0)
00:05:24.623 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:05:24.623 EAL: Ignore mapping IO port bar(1) / Probe PCI driver: spdk_ioat (8086:2021), repeated for devices 0000:80:04.0-0000:80:04.5 (socket 1)
00:05:24.623 EAL: Ignore mapping IO port bar(1)
00:05:24.623 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:24.623 EAL: Ignore mapping IO port bar(1)
00:05:24.623 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:27.910 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:27.910 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001080000
00:05:27.910 Starting DPDK initialization...
00:05:27.910 Starting SPDK post initialization...
00:05:27.910 SPDK NVMe probe
00:05:27.910 Attaching to 0000:5e:00.0
00:05:27.910 Attached to 0000:5e:00.0
00:05:27.910 Cleaning up...
00:05:27.910
00:05:27.910 real 0m4.482s
00:05:27.910 user 0m3.361s
00:05:27.910 sys 0m0.187s
00:05:27.910 18:20:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:27.910 18:20:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:27.910 ************************************
00:05:27.910 END TEST env_dpdk_post_init
00:05:27.910 ************************************
00:05:27.910 18:20:13 env -- common/autotest_common.sh@1142 -- # return 0
00:05:27.910 18:20:13 env -- env/env.sh@26 -- # uname
00:05:27.910 18:20:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:27.910 18:20:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:27.910 18:20:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:27.910 18:20:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:27.910 18:20:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:28.172 ************************************
00:05:28.172 START TEST env_mem_callbacks
00:05:28.172 ************************************
00:05:28.172 18:20:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:28.172 EAL: Detected CPU lcores: 88
00:05:28.172 EAL: Detected NUMA nodes: 2
00:05:28.172 EAL: Detected shared linkage of DPDK
00:05:28.172 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:28.172 EAL: Selected IOVA mode 'PA'
00:05:28.172 EAL: VFIO support initialized
00:05:28.172-00:05:28.174 EAL: Probe PCI driver: qat (8086:37c9), devices 0000:1a:01.0-0000:1a:01.7, 0000:1a:02.0-0000:1a:02.7, 0000:1c:01.0-0000:1c:01.7, 0000:1c:02.0-0000:1c:02.7, 0000:1e:01.0-0000:1e:01.7, 0000:1e:02.0-0000:1e:02.7 (socket 0); for each device, CRYPTODEV: Creating cryptodev <bdf>_qat_asym and <bdf>_qat_sym, Initialisation parameters - socket id: 0, max queue pairs: 0
00:05:28.174 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:28.174
00:05:28.174 CUnit - A unit testing framework for C - Version 2.1-3
00:05:28.174 http://cunit.sourceforge.net/
00:05:28.174
00:05:28.174 Suite: memory
00:05:28.174 Test: test ...
00:05:28.174 register 0x200000200000 2097152
00:05:28.174 malloc 3145728
00:05:28.174 register 0x200000400000 4194304
00:05:28.174 buf 0x200000500000 len 3145728 PASSED
00:05:28.174 malloc 64
00:05:28.174 buf 0x2000004fff40 len 64 PASSED
00:05:28.174 malloc 4194304
00:05:28.174 register 0x200000800000 6291456
00:05:28.174 buf 0x200000a00000 len 4194304 PASSED
00:05:28.174 free 0x200000500000 3145728
00:05:28.174 free 0x2000004fff40 64
00:05:28.174 unregister 0x200000400000 4194304 PASSED
00:05:28.174 free 0x200000a00000 4194304
00:05:28.174 unregister 0x200000800000 6291456 PASSED
00:05:28.174 malloc 8388608
00:05:28.174 register 0x200000400000 10485760
00:05:28.174 buf 0x200000600000 len 8388608 PASSED
00:05:28.174 free 0x200000600000 8388608
00:05:28.174 unregister 0x200000400000 10485760 PASSED
00:05:28.174 passed
00:05:28.174
00:05:28.174 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:28.174               suites      1      1    n/a      0        0
00:05:28.174                tests      1      1      1      0        0
00:05:28.174              asserts     15     15     15      0      n/a
00:05:28.174
00:05:28.174 Elapsed time = 0.007 seconds
00:05:28.174
00:05:28.174 real	0m0.088s
00:05:28.174 user	0m0.033s
00:05:28.174 sys	0m0.054s
00:05:28.174 18:20:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:28.174 18:20:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:28.174 ************************************
00:05:28.174 END TEST env_mem_callbacks
00:05:28.174 ************************************
00:05:28.174 18:20:13 env -- common/autotest_common.sh@1142 -- # return 0
00:05:28.174
00:05:28.174 real	0m6.476s
00:05:28.174 user	0m4.497s
00:05:28.174 sys	0m1.046s
00:05:28.174 18:20:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:28.174 18:20:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:28.174 ************************************
00:05:28.174 END TEST env
00:05:28.174 ************************************
00:05:28.174 18:20:13 -- common/autotest_common.sh@1142 -- # return 0
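The register/buf/unregister sequence in the env_mem_callbacks output above exercises one invariant: a buffer only PASSES while some registered region fully covers it. The following is an illustrative Python model of that containment check, not SPDK's actual `spdk_mem_register` implementation; the addresses and sizes are copied from the log, while the helper names are hypothetical.

```python
# Simplified model of the memory-callback invariant tested above:
# a buffer is valid only while a registered region fully contains it.
regions = {}  # start address -> region size

def register(vaddr, size):
    regions[vaddr] = size

def unregister(vaddr):
    del regions[vaddr]

def buf_ok(buf, length):
    # True if any registered region covers [buf, buf + length)
    return any(start <= buf and buf + length <= start + size
               for start, size in regions.items())

# Values taken from the log lines above:
register(0x200000400000, 4194304)
assert buf_ok(0x200000500000, 3145728)       # "buf ... len 3145728 PASSED"
unregister(0x200000400000)
assert not buf_ok(0x200000500000, 3145728)   # region gone, buffer invalid
```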
00:05:28.174 18:20:13 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.174 18:20:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.174 18:20:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.174 18:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:28.174 ************************************ 00:05:28.174 START TEST rpc 00:05:28.174 ************************************ 00:05:28.174 18:20:13 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.433 * Looking for test storage... 00:05:28.433 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:28.433 18:20:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2707649 00:05:28.433 18:20:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.433 18:20:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:28.433 18:20:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2707649 00:05:28.433 18:20:13 rpc -- common/autotest_common.sh@829 -- # '[' -z 2707649 ']' 00:05:28.433 18:20:13 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.433 18:20:13 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.433 18:20:13 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.433 18:20:13 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.433 18:20:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.433 [2024-07-15 18:20:13.819564] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:28.433 [2024-07-15 18:20:13.819633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707649 ] 00:05:28.433 [2024-07-15 18:20:13.920680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.690 [2024-07-15 18:20:14.012678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:28.690 [2024-07-15 18:20:14.012723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2707649' to capture a snapshot of events at runtime. 00:05:28.690 [2024-07-15 18:20:14.012734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.690 [2024-07-15 18:20:14.012743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.690 [2024-07-15 18:20:14.012750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2707649 for offline analysis/debug. 
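The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's `waitforlisten` helper, which in essence polls the socket until the freshly launched `spdk_tgt` accepts a connection. A minimal sketch of that polling pattern, assuming hypothetical timeout values rather than the ones in autotest_common.sh:

```python
import socket
import time

def waitforlisten(sock_path, timeout=5.0, interval=0.1):
    """Poll a UNIX domain socket until a server accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True   # target is up and listening
        except OSError:
            time.sleep(interval)  # not listening yet; retry
        finally:
            s.close()
    return False

# With nothing bound to the path, the poll times out and returns False.
print(waitforlisten("/var/tmp/nonexistent-spdk.sock", timeout=0.3))
```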
00:05:28.690 [2024-07-15 18:20:14.012775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.256 18:20:14 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.256 18:20:14 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.256 18:20:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:29.256 18:20:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:29.256 18:20:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.256 18:20:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.256 18:20:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.256 18:20:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.256 18:20:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.256 ************************************ 00:05:29.256 START TEST rpc_integrity 00:05:29.256 ************************************ 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.256 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.256 { 00:05:29.256 "name": "Malloc0", 00:05:29.256 "aliases": [ 00:05:29.256 "a270d1e6-29ad-457a-acb0-69ac0bdd8eaa" 00:05:29.256 ], 00:05:29.256 "product_name": "Malloc disk", 00:05:29.256 "block_size": 512, 00:05:29.256 "num_blocks": 16384, 00:05:29.256 "uuid": "a270d1e6-29ad-457a-acb0-69ac0bdd8eaa", 00:05:29.256 "assigned_rate_limits": { 00:05:29.256 "rw_ios_per_sec": 0, 00:05:29.256 "rw_mbytes_per_sec": 0, 00:05:29.256 "r_mbytes_per_sec": 0, 00:05:29.256 "w_mbytes_per_sec": 0 00:05:29.256 }, 00:05:29.256 "claimed": false, 00:05:29.256 "zoned": false, 00:05:29.256 "supported_io_types": { 00:05:29.256 "read": true, 00:05:29.256 "write": true, 00:05:29.256 "unmap": true, 00:05:29.256 "flush": true, 00:05:29.256 "reset": true, 00:05:29.256 "nvme_admin": false, 00:05:29.256 "nvme_io": false, 00:05:29.256 "nvme_io_md": false, 00:05:29.256 "write_zeroes": true, 00:05:29.256 "zcopy": true, 00:05:29.256 "get_zone_info": false, 00:05:29.256 "zone_management": 
false, 00:05:29.256 "zone_append": false, 00:05:29.256 "compare": false, 00:05:29.256 "compare_and_write": false, 00:05:29.256 "abort": true, 00:05:29.256 "seek_hole": false, 00:05:29.256 "seek_data": false, 00:05:29.256 "copy": true, 00:05:29.256 "nvme_iov_md": false 00:05:29.256 }, 00:05:29.256 "memory_domains": [ 00:05:29.256 { 00:05:29.256 "dma_device_id": "system", 00:05:29.256 "dma_device_type": 1 00:05:29.256 }, 00:05:29.256 { 00:05:29.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.256 "dma_device_type": 2 00:05:29.256 } 00:05:29.256 ], 00:05:29.256 "driver_specific": {} 00:05:29.256 } 00:05:29.256 ]' 00:05:29.256 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 [2024-07-15 18:20:14.846857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.514 [2024-07-15 18:20:14.846897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.514 [2024-07-15 18:20:14.846913] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3f0a0 00:05:29.514 [2024-07-15 18:20:14.846923] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.514 [2024-07-15 18:20:14.848578] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.514 [2024-07-15 18:20:14.848605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.514 Passthru0 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
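The `jq length` checks in the rpc_integrity run above parse the `bdev_get_bdevs` JSON and compare the array length at each step: 0 before any bdev exists, 1 after `bdev_malloc_create`, 2 after `bdev_passthru_create`. The same check, sketched in Python with the JSON abbreviated to just the names seen in the log (the full records carry many more fields):

```python
import json

# Abbreviated bdev_get_bdevs output after bdev_passthru_create -b Malloc0 -p Passthru0
bdevs_json = '[{"name": "Malloc0"}, {"name": "Passthru0"}]'

bdevs = json.loads(bdevs_json)
print(len(bdevs))  # mirrors the test's check: '[' 2 == 2 ']'
```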
00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.514 { 00:05:29.514 "name": "Malloc0", 00:05:29.514 "aliases": [ 00:05:29.514 "a270d1e6-29ad-457a-acb0-69ac0bdd8eaa" 00:05:29.514 ], 00:05:29.514 "product_name": "Malloc disk", 00:05:29.514 "block_size": 512, 00:05:29.514 "num_blocks": 16384, 00:05:29.514 "uuid": "a270d1e6-29ad-457a-acb0-69ac0bdd8eaa", 00:05:29.514 "assigned_rate_limits": { 00:05:29.514 "rw_ios_per_sec": 0, 00:05:29.514 "rw_mbytes_per_sec": 0, 00:05:29.514 "r_mbytes_per_sec": 0, 00:05:29.514 "w_mbytes_per_sec": 0 00:05:29.514 }, 00:05:29.514 "claimed": true, 00:05:29.514 "claim_type": "exclusive_write", 00:05:29.514 "zoned": false, 00:05:29.514 "supported_io_types": { 00:05:29.514 "read": true, 00:05:29.514 "write": true, 00:05:29.514 "unmap": true, 00:05:29.514 "flush": true, 00:05:29.514 "reset": true, 00:05:29.514 "nvme_admin": false, 00:05:29.514 "nvme_io": false, 00:05:29.514 "nvme_io_md": false, 00:05:29.514 "write_zeroes": true, 00:05:29.514 "zcopy": true, 00:05:29.514 "get_zone_info": false, 00:05:29.514 "zone_management": false, 00:05:29.514 "zone_append": false, 00:05:29.514 "compare": false, 00:05:29.514 "compare_and_write": false, 00:05:29.514 "abort": true, 00:05:29.514 "seek_hole": false, 00:05:29.514 "seek_data": false, 00:05:29.514 "copy": true, 00:05:29.514 "nvme_iov_md": false 00:05:29.514 }, 00:05:29.514 "memory_domains": [ 00:05:29.514 { 00:05:29.514 "dma_device_id": "system", 00:05:29.514 "dma_device_type": 1 00:05:29.514 }, 00:05:29.514 { 00:05:29.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.514 "dma_device_type": 2 00:05:29.514 } 00:05:29.514 ], 00:05:29.514 "driver_specific": {} 00:05:29.514 }, 00:05:29.514 { 00:05:29.514 
"name": "Passthru0", 00:05:29.514 "aliases": [ 00:05:29.514 "25a9ba12-0a3e-5a97-8b61-a35aa6fb6aaa" 00:05:29.514 ], 00:05:29.514 "product_name": "passthru", 00:05:29.514 "block_size": 512, 00:05:29.514 "num_blocks": 16384, 00:05:29.514 "uuid": "25a9ba12-0a3e-5a97-8b61-a35aa6fb6aaa", 00:05:29.514 "assigned_rate_limits": { 00:05:29.514 "rw_ios_per_sec": 0, 00:05:29.514 "rw_mbytes_per_sec": 0, 00:05:29.514 "r_mbytes_per_sec": 0, 00:05:29.514 "w_mbytes_per_sec": 0 00:05:29.514 }, 00:05:29.514 "claimed": false, 00:05:29.514 "zoned": false, 00:05:29.514 "supported_io_types": { 00:05:29.514 "read": true, 00:05:29.514 "write": true, 00:05:29.514 "unmap": true, 00:05:29.514 "flush": true, 00:05:29.514 "reset": true, 00:05:29.514 "nvme_admin": false, 00:05:29.514 "nvme_io": false, 00:05:29.514 "nvme_io_md": false, 00:05:29.514 "write_zeroes": true, 00:05:29.514 "zcopy": true, 00:05:29.514 "get_zone_info": false, 00:05:29.514 "zone_management": false, 00:05:29.514 "zone_append": false, 00:05:29.514 "compare": false, 00:05:29.514 "compare_and_write": false, 00:05:29.514 "abort": true, 00:05:29.514 "seek_hole": false, 00:05:29.514 "seek_data": false, 00:05:29.514 "copy": true, 00:05:29.514 "nvme_iov_md": false 00:05:29.514 }, 00:05:29.514 "memory_domains": [ 00:05:29.514 { 00:05:29.514 "dma_device_id": "system", 00:05:29.514 "dma_device_type": 1 00:05:29.514 }, 00:05:29.514 { 00:05:29.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.514 "dma_device_type": 2 00:05:29.514 } 00:05:29.514 ], 00:05:29.514 "driver_specific": { 00:05:29.514 "passthru": { 00:05:29.514 "name": "Passthru0", 00:05:29.514 "base_bdev_name": "Malloc0" 00:05:29.514 } 00:05:29.514 } 00:05:29.514 } 00:05:29.514 ]' 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.514 18:20:14 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.514 18:20:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.514 00:05:29.514 real 0m0.288s 00:05:29.514 user 0m0.182s 00:05:29.514 sys 0m0.046s 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.514 18:20:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 ************************************ 00:05:29.514 END TEST rpc_integrity 00:05:29.514 ************************************ 00:05:29.514 18:20:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.514 18:20:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:29.514 18:20:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.514 18:20:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.514 18:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.514 
************************************ 00:05:29.514 START TEST rpc_plugins 00:05:29.514 ************************************ 00:05:29.514 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:29.515 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:29.772 { 00:05:29.772 "name": "Malloc1", 00:05:29.772 "aliases": [ 00:05:29.772 "e6df32ea-afcd-4da9-b1c3-290594497f24" 00:05:29.772 ], 00:05:29.772 "product_name": "Malloc disk", 00:05:29.772 "block_size": 4096, 00:05:29.772 "num_blocks": 256, 00:05:29.772 "uuid": "e6df32ea-afcd-4da9-b1c3-290594497f24", 00:05:29.772 "assigned_rate_limits": { 00:05:29.772 "rw_ios_per_sec": 0, 00:05:29.772 "rw_mbytes_per_sec": 0, 00:05:29.772 "r_mbytes_per_sec": 0, 00:05:29.772 "w_mbytes_per_sec": 0 00:05:29.772 }, 00:05:29.772 "claimed": false, 00:05:29.772 "zoned": false, 00:05:29.772 "supported_io_types": { 00:05:29.772 "read": true, 00:05:29.772 "write": true, 00:05:29.772 "unmap": true, 00:05:29.772 "flush": true, 00:05:29.772 "reset": true, 00:05:29.772 "nvme_admin": false, 00:05:29.772 "nvme_io": false, 00:05:29.772 "nvme_io_md": false, 00:05:29.772 "write_zeroes": true, 00:05:29.772 "zcopy": true, 00:05:29.772 
"get_zone_info": false, 00:05:29.772 "zone_management": false, 00:05:29.772 "zone_append": false, 00:05:29.772 "compare": false, 00:05:29.772 "compare_and_write": false, 00:05:29.772 "abort": true, 00:05:29.772 "seek_hole": false, 00:05:29.772 "seek_data": false, 00:05:29.772 "copy": true, 00:05:29.772 "nvme_iov_md": false 00:05:29.772 }, 00:05:29.772 "memory_domains": [ 00:05:29.772 { 00:05:29.772 "dma_device_id": "system", 00:05:29.772 "dma_device_type": 1 00:05:29.772 }, 00:05:29.772 { 00:05:29.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.772 "dma_device_type": 2 00:05:29.772 } 00:05:29.772 ], 00:05:29.772 "driver_specific": {} 00:05:29.772 } 00:05:29.772 ]' 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.772 18:20:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.772 00:05:29.772 real 0m0.143s 00:05:29.772 user 0m0.098s 00:05:29.772 sys 0m0.016s 00:05:29.772 18:20:15 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.772 18:20:15 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.772 ************************************ 00:05:29.772 END TEST rpc_plugins 00:05:29.772 ************************************ 00:05:29.772 18:20:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.772 18:20:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.772 18:20:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.772 18:20:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.772 18:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 ************************************ 00:05:29.772 START TEST rpc_trace_cmd_test 00:05:29.772 ************************************ 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.772 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.772 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2707649", 00:05:29.772 "tpoint_group_mask": "0x8", 00:05:29.772 "iscsi_conn": { 00:05:29.773 "mask": "0x2", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "scsi": { 00:05:29.773 "mask": "0x4", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "bdev": { 00:05:29.773 "mask": "0x8", 00:05:29.773 "tpoint_mask": "0xffffffffffffffff" 00:05:29.773 }, 00:05:29.773 "nvmf_rdma": { 00:05:29.773 "mask": "0x10", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "nvmf_tcp": { 00:05:29.773 "mask": "0x20", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 
00:05:29.773 "ftl": { 00:05:29.773 "mask": "0x40", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "blobfs": { 00:05:29.773 "mask": "0x80", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "dsa": { 00:05:29.773 "mask": "0x200", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "thread": { 00:05:29.773 "mask": "0x400", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "nvme_pcie": { 00:05:29.773 "mask": "0x800", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "iaa": { 00:05:29.773 "mask": "0x1000", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "nvme_tcp": { 00:05:29.773 "mask": "0x2000", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "bdev_nvme": { 00:05:29.773 "mask": "0x4000", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 }, 00:05:29.773 "sock": { 00:05:29.773 "mask": "0x8000", 00:05:29.773 "tpoint_mask": "0x0" 00:05:29.773 } 00:05:29.773 }' 00:05:29.773 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.033 00:05:30.033 real 0m0.250s 00:05:30.033 user 0m0.216s 00:05:30.033 sys 0m0.028s 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.033 18:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.033 ************************************ 00:05:30.033 END TEST rpc_trace_cmd_test 00:05:30.033 ************************************ 00:05:30.033 18:20:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.033 18:20:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.033 18:20:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.033 18:20:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.033 18:20:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.033 18:20:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.033 18:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.291 ************************************ 00:05:30.291 START TEST rpc_daemon_integrity 00:05:30.291 ************************************ 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.291 18:20:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.291 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.291 { 00:05:30.291 "name": "Malloc2", 00:05:30.291 "aliases": [ 00:05:30.291 "645f80e3-338e-4f31-8ce3-976ae6645026" 00:05:30.291 ], 00:05:30.291 "product_name": "Malloc disk", 00:05:30.291 "block_size": 512, 00:05:30.291 "num_blocks": 16384, 00:05:30.291 "uuid": "645f80e3-338e-4f31-8ce3-976ae6645026", 00:05:30.291 "assigned_rate_limits": { 00:05:30.291 "rw_ios_per_sec": 0, 00:05:30.291 "rw_mbytes_per_sec": 0, 00:05:30.291 "r_mbytes_per_sec": 0, 00:05:30.291 "w_mbytes_per_sec": 0 00:05:30.291 }, 00:05:30.291 "claimed": false, 00:05:30.291 "zoned": false, 00:05:30.291 "supported_io_types": { 00:05:30.291 "read": true, 00:05:30.291 "write": true, 00:05:30.291 "unmap": true, 00:05:30.291 "flush": true, 00:05:30.291 "reset": true, 00:05:30.291 "nvme_admin": false, 00:05:30.292 "nvme_io": false, 00:05:30.292 "nvme_io_md": false, 00:05:30.292 "write_zeroes": true, 00:05:30.292 "zcopy": true, 00:05:30.292 "get_zone_info": false, 00:05:30.292 "zone_management": false, 00:05:30.292 "zone_append": false, 00:05:30.292 "compare": false, 00:05:30.292 "compare_and_write": false, 00:05:30.292 "abort": true, 00:05:30.292 "seek_hole": false, 00:05:30.292 "seek_data": false, 00:05:30.292 "copy": true, 00:05:30.292 "nvme_iov_md": false 00:05:30.292 }, 00:05:30.292 "memory_domains": [ 00:05:30.292 { 00:05:30.292 "dma_device_id": "system", 00:05:30.292 "dma_device_type": 
1 00:05:30.292 }, 00:05:30.292 { 00:05:30.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.292 "dma_device_type": 2 00:05:30.292 } 00:05:30.292 ], 00:05:30.292 "driver_specific": {} 00:05:30.292 } 00:05:30.292 ]' 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 [2024-07-15 18:20:15.737431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.292 [2024-07-15 18:20:15.737467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.292 [2024-07-15 18:20:15.737483] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbe8700 00:05:30.292 [2024-07-15 18:20:15.737492] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.292 [2024-07-15 18:20:15.738933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.292 [2024-07-15 18:20:15.738965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.292 Passthru0 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:05:30.292 { 00:05:30.292 "name": "Malloc2", 00:05:30.292 "aliases": [ 00:05:30.292 "645f80e3-338e-4f31-8ce3-976ae6645026" 00:05:30.292 ], 00:05:30.292 "product_name": "Malloc disk", 00:05:30.292 "block_size": 512, 00:05:30.292 "num_blocks": 16384, 00:05:30.292 "uuid": "645f80e3-338e-4f31-8ce3-976ae6645026", 00:05:30.292 "assigned_rate_limits": { 00:05:30.292 "rw_ios_per_sec": 0, 00:05:30.292 "rw_mbytes_per_sec": 0, 00:05:30.292 "r_mbytes_per_sec": 0, 00:05:30.292 "w_mbytes_per_sec": 0 00:05:30.292 }, 00:05:30.292 "claimed": true, 00:05:30.292 "claim_type": "exclusive_write", 00:05:30.292 "zoned": false, 00:05:30.292 "supported_io_types": { 00:05:30.292 "read": true, 00:05:30.292 "write": true, 00:05:30.292 "unmap": true, 00:05:30.292 "flush": true, 00:05:30.292 "reset": true, 00:05:30.292 "nvme_admin": false, 00:05:30.292 "nvme_io": false, 00:05:30.292 "nvme_io_md": false, 00:05:30.292 "write_zeroes": true, 00:05:30.292 "zcopy": true, 00:05:30.292 "get_zone_info": false, 00:05:30.292 "zone_management": false, 00:05:30.292 "zone_append": false, 00:05:30.292 "compare": false, 00:05:30.292 "compare_and_write": false, 00:05:30.292 "abort": true, 00:05:30.292 "seek_hole": false, 00:05:30.292 "seek_data": false, 00:05:30.292 "copy": true, 00:05:30.292 "nvme_iov_md": false 00:05:30.292 }, 00:05:30.292 "memory_domains": [ 00:05:30.292 { 00:05:30.292 "dma_device_id": "system", 00:05:30.292 "dma_device_type": 1 00:05:30.292 }, 00:05:30.292 { 00:05:30.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.292 "dma_device_type": 2 00:05:30.292 } 00:05:30.292 ], 00:05:30.292 "driver_specific": {} 00:05:30.292 }, 00:05:30.292 { 00:05:30.292 "name": "Passthru0", 00:05:30.292 "aliases": [ 00:05:30.292 "bc01403f-c508-58ea-8b24-a2eda0f3123d" 00:05:30.292 ], 00:05:30.292 "product_name": "passthru", 00:05:30.292 "block_size": 512, 00:05:30.292 "num_blocks": 16384, 00:05:30.292 "uuid": "bc01403f-c508-58ea-8b24-a2eda0f3123d", 00:05:30.292 "assigned_rate_limits": { 00:05:30.292 
"rw_ios_per_sec": 0, 00:05:30.292 "rw_mbytes_per_sec": 0, 00:05:30.292 "r_mbytes_per_sec": 0, 00:05:30.292 "w_mbytes_per_sec": 0 00:05:30.292 }, 00:05:30.292 "claimed": false, 00:05:30.292 "zoned": false, 00:05:30.292 "supported_io_types": { 00:05:30.292 "read": true, 00:05:30.292 "write": true, 00:05:30.292 "unmap": true, 00:05:30.292 "flush": true, 00:05:30.292 "reset": true, 00:05:30.292 "nvme_admin": false, 00:05:30.292 "nvme_io": false, 00:05:30.292 "nvme_io_md": false, 00:05:30.292 "write_zeroes": true, 00:05:30.292 "zcopy": true, 00:05:30.292 "get_zone_info": false, 00:05:30.292 "zone_management": false, 00:05:30.292 "zone_append": false, 00:05:30.292 "compare": false, 00:05:30.292 "compare_and_write": false, 00:05:30.292 "abort": true, 00:05:30.292 "seek_hole": false, 00:05:30.292 "seek_data": false, 00:05:30.292 "copy": true, 00:05:30.292 "nvme_iov_md": false 00:05:30.292 }, 00:05:30.292 "memory_domains": [ 00:05:30.292 { 00:05:30.292 "dma_device_id": "system", 00:05:30.292 "dma_device_type": 1 00:05:30.292 }, 00:05:30.292 { 00:05:30.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.292 "dma_device_type": 2 00:05:30.292 } 00:05:30.292 ], 00:05:30.292 "driver_specific": { 00:05:30.292 "passthru": { 00:05:30.292 "name": "Passthru0", 00:05:30.292 "base_bdev_name": "Malloc2" 00:05:30.292 } 00:05:30.292 } 00:05:30.292 } 00:05:30.292 ]' 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.292 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.551 18:20:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.551 00:05:30.551 real 0m0.287s 00:05:30.551 user 0m0.201s 00:05:30.551 sys 0m0.033s 00:05:30.551 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.551 18:20:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.551 ************************************ 00:05:30.551 END TEST rpc_daemon_integrity 00:05:30.551 ************************************ 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.551 18:20:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.551 18:20:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2707649 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@948 -- # '[' -z 2707649 ']' 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@952 -- # kill -0 2707649 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@953 -- # uname 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2707649 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2707649' 00:05:30.551 killing process with pid 2707649 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@967 -- # kill 2707649 00:05:30.551 18:20:15 rpc -- common/autotest_common.sh@972 -- # wait 2707649 00:05:30.810 00:05:30.810 real 0m2.639s 00:05:30.810 user 0m3.458s 00:05:30.810 sys 0m0.725s 00:05:30.810 18:20:16 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.810 18:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.810 ************************************ 00:05:30.810 END TEST rpc 00:05:30.810 ************************************ 00:05:30.810 18:20:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.810 18:20:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.810 18:20:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.810 18:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.810 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 ************************************ 00:05:31.069 START TEST skip_rpc 00:05:31.069 ************************************ 00:05:31.069 18:20:16 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.069 * Looking for test storage... 
00:05:31.069 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:31.069 18:20:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:31.069 18:20:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:31.069 18:20:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.069 18:20:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.069 18:20:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.069 18:20:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 ************************************ 00:05:31.069 START TEST skip_rpc 00:05:31.069 ************************************ 00:05:31.069 18:20:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:31.069 18:20:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2708268 00:05:31.069 18:20:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.069 18:20:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.069 18:20:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.069 [2024-07-15 18:20:16.567825] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:31.069 [2024-07-15 18:20:16.567887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2708268 ] 00:05:31.328 [2024-07-15 18:20:16.667793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.328 [2024-07-15 18:20:16.762556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:36.597 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2708268 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2708268 ']' 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2708268 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2708268 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2708268' 00:05:36.598 killing process with pid 2708268 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2708268 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2708268 00:05:36.598 00:05:36.598 real 0m5.394s 00:05:36.598 user 0m5.111s 00:05:36.598 sys 0m0.299s 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.598 18:20:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.598 ************************************ 00:05:36.598 END TEST skip_rpc 00:05:36.598 ************************************ 00:05:36.598 18:20:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.598 18:20:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.598 18:20:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.598 18:20:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.598 18:20:21 skip_rpc -- common/autotest_common.sh@10 
-- # set +x 00:05:36.598 ************************************ 00:05:36.598 START TEST skip_rpc_with_json 00:05:36.598 ************************************ 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2709167 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2709167 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2709167 ']' 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.598 18:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.598 [2024-07-15 18:20:22.077481] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:36.598 [2024-07-15 18:20:22.077593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709167 ] 00:05:36.857 [2024-07-15 18:20:22.214177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.857 [2024-07-15 18:20:22.305906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.424 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.424 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:37.424 18:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.424 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.424 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.424 [2024-07-15 18:20:22.974220] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.687 request: 00:05:37.687 { 00:05:37.687 "trtype": "tcp", 00:05:37.687 "method": "nvmf_get_transports", 00:05:37.687 "req_id": 1 00:05:37.687 } 00:05:37.687 Got JSON-RPC error response 00:05:37.687 response: 00:05:37.687 { 00:05:37.687 "code": -19, 00:05:37.687 "message": "No such device" 00:05:37.687 } 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.687 [2024-07-15 18:20:22.986369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.687 18:20:22 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.687 18:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.687 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.687 18:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:37.687 { 00:05:37.687 "subsystems": [ 00:05:37.687 { 00:05:37.687 "subsystem": "keyring", 00:05:37.687 "config": [] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "iobuf", 00:05:37.687 "config": [ 00:05:37.687 { 00:05:37.687 "method": "iobuf_set_options", 00:05:37.687 "params": { 00:05:37.687 "small_pool_count": 8192, 00:05:37.687 "large_pool_count": 1024, 00:05:37.687 "small_bufsize": 8192, 00:05:37.687 "large_bufsize": 135168 00:05:37.687 } 00:05:37.687 } 00:05:37.687 ] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "sock", 00:05:37.687 "config": [ 00:05:37.687 { 00:05:37.687 "method": "sock_set_default_impl", 00:05:37.687 "params": { 00:05:37.687 "impl_name": "posix" 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "sock_impl_set_options", 00:05:37.687 "params": { 00:05:37.687 "impl_name": "ssl", 00:05:37.687 "recv_buf_size": 4096, 00:05:37.687 "send_buf_size": 4096, 00:05:37.687 "enable_recv_pipe": true, 00:05:37.687 "enable_quickack": false, 00:05:37.687 "enable_placement_id": 0, 00:05:37.687 "enable_zerocopy_send_server": true, 00:05:37.687 "enable_zerocopy_send_client": false, 00:05:37.687 "zerocopy_threshold": 0, 00:05:37.687 "tls_version": 0, 00:05:37.687 "enable_ktls": false 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "sock_impl_set_options", 00:05:37.687 "params": { 
00:05:37.687 "impl_name": "posix", 00:05:37.687 "recv_buf_size": 2097152, 00:05:37.687 "send_buf_size": 2097152, 00:05:37.687 "enable_recv_pipe": true, 00:05:37.687 "enable_quickack": false, 00:05:37.687 "enable_placement_id": 0, 00:05:37.687 "enable_zerocopy_send_server": true, 00:05:37.687 "enable_zerocopy_send_client": false, 00:05:37.687 "zerocopy_threshold": 0, 00:05:37.687 "tls_version": 0, 00:05:37.687 "enable_ktls": false 00:05:37.687 } 00:05:37.687 } 00:05:37.687 ] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "vmd", 00:05:37.687 "config": [] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "accel", 00:05:37.687 "config": [ 00:05:37.687 { 00:05:37.687 "method": "accel_set_options", 00:05:37.687 "params": { 00:05:37.687 "small_cache_size": 128, 00:05:37.687 "large_cache_size": 16, 00:05:37.687 "task_count": 2048, 00:05:37.687 "sequence_count": 2048, 00:05:37.687 "buf_count": 2048 00:05:37.687 } 00:05:37.687 } 00:05:37.687 ] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "bdev", 00:05:37.687 "config": [ 00:05:37.687 { 00:05:37.687 "method": "bdev_set_options", 00:05:37.687 "params": { 00:05:37.687 "bdev_io_pool_size": 65535, 00:05:37.687 "bdev_io_cache_size": 256, 00:05:37.687 "bdev_auto_examine": true, 00:05:37.687 "iobuf_small_cache_size": 128, 00:05:37.687 "iobuf_large_cache_size": 16 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "bdev_raid_set_options", 00:05:37.687 "params": { 00:05:37.687 "process_window_size_kb": 1024 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "bdev_iscsi_set_options", 00:05:37.687 "params": { 00:05:37.687 "timeout_sec": 30 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "bdev_nvme_set_options", 00:05:37.687 "params": { 00:05:37.687 "action_on_timeout": "none", 00:05:37.687 "timeout_us": 0, 00:05:37.687 "timeout_admin_us": 0, 00:05:37.687 "keep_alive_timeout_ms": 10000, 00:05:37.687 "arbitration_burst": 0, 00:05:37.687 
"low_priority_weight": 0, 00:05:37.687 "medium_priority_weight": 0, 00:05:37.687 "high_priority_weight": 0, 00:05:37.687 "nvme_adminq_poll_period_us": 10000, 00:05:37.687 "nvme_ioq_poll_period_us": 0, 00:05:37.687 "io_queue_requests": 0, 00:05:37.687 "delay_cmd_submit": true, 00:05:37.687 "transport_retry_count": 4, 00:05:37.687 "bdev_retry_count": 3, 00:05:37.687 "transport_ack_timeout": 0, 00:05:37.687 "ctrlr_loss_timeout_sec": 0, 00:05:37.687 "reconnect_delay_sec": 0, 00:05:37.687 "fast_io_fail_timeout_sec": 0, 00:05:37.687 "disable_auto_failback": false, 00:05:37.687 "generate_uuids": false, 00:05:37.687 "transport_tos": 0, 00:05:37.687 "nvme_error_stat": false, 00:05:37.687 "rdma_srq_size": 0, 00:05:37.687 "io_path_stat": false, 00:05:37.687 "allow_accel_sequence": false, 00:05:37.687 "rdma_max_cq_size": 0, 00:05:37.687 "rdma_cm_event_timeout_ms": 0, 00:05:37.687 "dhchap_digests": [ 00:05:37.687 "sha256", 00:05:37.687 "sha384", 00:05:37.687 "sha512" 00:05:37.687 ], 00:05:37.687 "dhchap_dhgroups": [ 00:05:37.687 "null", 00:05:37.687 "ffdhe2048", 00:05:37.687 "ffdhe3072", 00:05:37.687 "ffdhe4096", 00:05:37.687 "ffdhe6144", 00:05:37.687 "ffdhe8192" 00:05:37.687 ] 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "bdev_nvme_set_hotplug", 00:05:37.687 "params": { 00:05:37.687 "period_us": 100000, 00:05:37.687 "enable": false 00:05:37.687 } 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "method": "bdev_wait_for_examine" 00:05:37.687 } 00:05:37.687 ] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "scsi", 00:05:37.687 "config": null 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "scheduler", 00:05:37.687 "config": [ 00:05:37.687 { 00:05:37.687 "method": "framework_set_scheduler", 00:05:37.687 "params": { 00:05:37.687 "name": "static" 00:05:37.687 } 00:05:37.687 } 00:05:37.687 ] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "vhost_scsi", 00:05:37.687 "config": [] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": 
"vhost_blk", 00:05:37.687 "config": [] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "ublk", 00:05:37.687 "config": [] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "nbd", 00:05:37.687 "config": [] 00:05:37.687 }, 00:05:37.687 { 00:05:37.687 "subsystem": "nvmf", 00:05:37.687 "config": [ 00:05:37.687 { 00:05:37.687 "method": "nvmf_set_config", 00:05:37.687 "params": { 00:05:37.688 "discovery_filter": "match_any", 00:05:37.688 "admin_cmd_passthru": { 00:05:37.688 "identify_ctrlr": false 00:05:37.688 } 00:05:37.688 } 00:05:37.688 }, 00:05:37.688 { 00:05:37.688 "method": "nvmf_set_max_subsystems", 00:05:37.688 "params": { 00:05:37.688 "max_subsystems": 1024 00:05:37.688 } 00:05:37.688 }, 00:05:37.688 { 00:05:37.688 "method": "nvmf_set_crdt", 00:05:37.688 "params": { 00:05:37.688 "crdt1": 0, 00:05:37.688 "crdt2": 0, 00:05:37.688 "crdt3": 0 00:05:37.688 } 00:05:37.688 }, 00:05:37.688 { 00:05:37.688 "method": "nvmf_create_transport", 00:05:37.688 "params": { 00:05:37.688 "trtype": "TCP", 00:05:37.688 "max_queue_depth": 128, 00:05:37.688 "max_io_qpairs_per_ctrlr": 127, 00:05:37.688 "in_capsule_data_size": 4096, 00:05:37.688 "max_io_size": 131072, 00:05:37.688 "io_unit_size": 131072, 00:05:37.688 "max_aq_depth": 128, 00:05:37.688 "num_shared_buffers": 511, 00:05:37.688 "buf_cache_size": 4294967295, 00:05:37.688 "dif_insert_or_strip": false, 00:05:37.688 "zcopy": false, 00:05:37.688 "c2h_success": true, 00:05:37.688 "sock_priority": 0, 00:05:37.688 "abort_timeout_sec": 1, 00:05:37.688 "ack_timeout": 0, 00:05:37.688 "data_wr_pool_size": 0 00:05:37.688 } 00:05:37.688 } 00:05:37.688 ] 00:05:37.688 }, 00:05:37.688 { 00:05:37.688 "subsystem": "iscsi", 00:05:37.688 "config": [ 00:05:37.688 { 00:05:37.688 "method": "iscsi_set_options", 00:05:37.688 "params": { 00:05:37.688 "node_base": "iqn.2016-06.io.spdk", 00:05:37.688 "max_sessions": 128, 00:05:37.688 "max_connections_per_session": 2, 00:05:37.688 "max_queue_depth": 64, 00:05:37.688 "default_time2wait": 2, 
00:05:37.688 "default_time2retain": 20, 00:05:37.688 "first_burst_length": 8192, 00:05:37.688 "immediate_data": true, 00:05:37.688 "allow_duplicated_isid": false, 00:05:37.688 "error_recovery_level": 0, 00:05:37.688 "nop_timeout": 60, 00:05:37.688 "nop_in_interval": 30, 00:05:37.688 "disable_chap": false, 00:05:37.688 "require_chap": false, 00:05:37.688 "mutual_chap": false, 00:05:37.688 "chap_group": 0, 00:05:37.688 "max_large_datain_per_connection": 64, 00:05:37.688 "max_r2t_per_connection": 4, 00:05:37.688 "pdu_pool_size": 36864, 00:05:37.688 "immediate_data_pool_size": 16384, 00:05:37.688 "data_out_pool_size": 2048 00:05:37.688 } 00:05:37.688 } 00:05:37.688 ] 00:05:37.688 } 00:05:37.688 ] 00:05:37.688 } 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2709167 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2709167 ']' 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2709167 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2709167 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2709167' 00:05:37.688 killing process with pid 2709167 00:05:37.688 18:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2709167 00:05:37.688 18:20:23 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2709167 00:05:38.280 18:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2709408 00:05:38.280 18:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.280 18:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2709408 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2709408 ']' 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2709408 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2709408 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2709408' 00:05:43.549 killing process with pid 2709408 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2709408 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2709408 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:43.549 00:05:43.549 real 0m6.951s 00:05:43.549 user 0m6.769s 00:05:43.549 sys 0m0.732s 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 ************************************ 00:05:43.549 END TEST skip_rpc_with_json 00:05:43.549 ************************************ 00:05:43.549 18:20:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.549 18:20:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.549 18:20:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.549 18:20:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.549 18:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 ************************************ 00:05:43.549 START TEST skip_rpc_with_delay 00:05:43.549 ************************************ 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:43.549 18:20:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.550 18:20:28 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.550 18:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.550 [2024-07-15 18:20:29.052372] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:43.550 [2024-07-15 18:20:29.052464] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:43.550 18:20:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:43.550 18:20:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.550 18:20:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.550 18:20:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.550 00:05:43.550 real 0m0.088s 00:05:43.550 user 0m0.062s 00:05:43.550 sys 0m0.026s 00:05:43.550 18:20:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.550 18:20:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.550 ************************************ 00:05:43.550 END TEST skip_rpc_with_delay 00:05:43.550 ************************************ 00:05:43.550 18:20:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.807 18:20:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.807 18:20:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.807 18:20:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.807 18:20:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.807 18:20:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.807 18:20:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.807 ************************************ 00:05:43.807 START TEST exit_on_failed_rpc_init 00:05:43.807 ************************************ 00:05:43.807 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:43.807 18:20:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2710329 00:05:43.807 18:20:29 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 2710329 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2710329 ']' 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.808 18:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.808 [2024-07-15 18:20:29.208207] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:43.808 [2024-07-15 18:20:29.208266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710329 ] 00:05:43.808 [2024-07-15 18:20:29.310537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.066 [2024-07-15 18:20:29.400647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.631 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.888 [2024-07-15 18:20:30.223427] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:44.888 [2024-07-15 18:20:30.223488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710549 ] 00:05:44.888 [2024-07-15 18:20:30.327977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.888 [2024-07-15 18:20:30.433939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.888 [2024-07-15 18:20:30.434036] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:44.888 [2024-07-15 18:20:30.434053] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.888 [2024-07-15 18:20:30.434064] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2710329 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2710329 ']' 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2710329 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2710329 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2710329' 
00:05:45.147 killing process with pid 2710329 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2710329 00:05:45.147 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2710329 00:05:45.405 00:05:45.405 real 0m1.791s 00:05:45.405 user 0m2.190s 00:05:45.405 sys 0m0.506s 00:05:45.405 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.405 18:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.405 ************************************ 00:05:45.405 END TEST exit_on_failed_rpc_init 00:05:45.405 ************************************ 00:05:45.664 18:20:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.664 18:20:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:45.664 00:05:45.664 real 0m14.599s 00:05:45.664 user 0m14.273s 00:05:45.664 sys 0m1.822s 00:05:45.664 18:20:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.664 18:20:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.664 ************************************ 00:05:45.664 END TEST skip_rpc 00:05:45.664 ************************************ 00:05:45.664 18:20:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.664 18:20:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.664 18:20:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.664 18:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.664 18:20:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.664 ************************************ 00:05:45.664 START TEST rpc_client 00:05:45.664 ************************************ 00:05:45.664 18:20:31 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.664 * Looking for test storage... 00:05:45.664 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:45.664 18:20:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:45.664 OK 00:05:45.664 18:20:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.664 00:05:45.664 real 0m0.113s 00:05:45.664 user 0m0.052s 00:05:45.664 sys 0m0.069s 00:05:45.664 18:20:31 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.664 18:20:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.664 ************************************ 00:05:45.664 END TEST rpc_client 00:05:45.664 ************************************ 00:05:45.664 18:20:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.664 18:20:31 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.664 18:20:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.664 18:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.664 18:20:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.923 ************************************ 00:05:45.923 START TEST json_config 00:05:45.923 ************************************ 00:05:45.923 18:20:31 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.923 18:20:31 
json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b98b40-9a1d-eb11-906e-0017a4403562 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b98b40-9a1d-eb11-906e-0017a4403562 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:45.923 18:20:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.923 18:20:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.923 18:20:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.923 18:20:31 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.923 18:20:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.923 18:20:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.923 18:20:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.923 18:20:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@47 -- # : 0 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.923 
18:20:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.923 18:20:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.923 18:20:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:45.924 INFO: JSON configuration test init 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 18:20:31 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.924 18:20:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:45.924 18:20:31 json_config -- json_config/common.sh@10 -- # shift 00:05:45.924 18:20:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.924 18:20:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.924 18:20:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.924 18:20:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.924 18:20:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:45.924 18:20:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2710777 00:05:45.924 18:20:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.924 Waiting for target to run... 00:05:45.924 18:20:31 json_config -- json_config/common.sh@25 -- # waitforlisten 2710777 /var/tmp/spdk_tgt.sock 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@829 -- # '[' -z 2710777 ']' 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.924 18:20:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.924 18:20:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 [2024-07-15 18:20:31.393464] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:45.924 [2024-07-15 18:20:31.393532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710777 ] 00:05:46.491 [2024-07-15 18:20:31.865297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.491 [2024-07-15 18:20:31.964252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.059 18:20:32 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.059 18:20:32 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:47.059 18:20:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.059 00:05:47.059 18:20:32 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:47.059 18:20:32 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:47.059 18:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.059 18:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.059 18:20:32 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:05:47.059 18:20:32 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:47.059 18:20:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:47.059 18:20:32 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:47.059 18:20:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:47.318 [2024-07-15 18:20:32.842881] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module 
dpdk_cryptodev 00:05:47.318 18:20:32 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:47.318 18:20:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:47.576 [2024-07-15 18:20:33.099546] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:47.576 18:20:33 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:47.576 18:20:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.576 18:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.843 18:20:33 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.843 18:20:33 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:47.843 18:20:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:48.104 [2024-07-15 18:20:33.413375] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:53.376 18:20:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.376 18:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 
00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:53.376 18:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:53.376 18:20:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.376 18:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:05:53.376 18:20:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.376 18:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:05:53.376 18:20:38 
json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:05:53.376 18:20:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:53.377 18:20:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:53.377 18:20:38 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:05:53.377 18:20:38 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:53.377 18:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:53.636 18:20:39 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:05:53.636 18:20:39 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:53.636 18:20:39 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:53.636 18:20:39 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:05:53.636 18:20:39 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:05:53.636 18:20:39 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:53.636 18:20:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:53.895 Nvme0n1p0 Nvme0n1p1 00:05:53.895 18:20:39 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:53.895 18:20:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:54.153 [2024-07-15 18:20:39.502030] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:54.153 [2024-07-15 18:20:39.502084] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: Malloc0 00:05:54.153 00:05:54.153 18:20:39 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:54.153 18:20:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:54.412 Malloc3 00:05:54.412 18:20:39 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:54.412 18:20:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:54.670 [2024-07-15 18:20:40.007508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.670 [2024-07-15 18:20:40.007559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.670 [2024-07-15 18:20:40.007582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b23b0 00:05:54.670 [2024-07-15 18:20:40.007592] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.670 [2024-07-15 18:20:40.009482] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.670 [2024-07-15 18:20:40.009511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:54.670 PTBdevFromMalloc3 00:05:54.670 18:20:40 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:54.670 18:20:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:54.929 Null0 00:05:54.929 18:20:40 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:54.929 18:20:40 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:55.188 Malloc0 00:05:55.188 18:20:40 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:55.188 18:20:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:55.756 Malloc1 00:05:55.756 18:20:41 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:55.756 18:20:41 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:55.756 102400+0 records in 00:05:55.756 102400+0 records out 00:05:55.756 104857600 bytes (105 MB, 100 MiB) copied, 0.156819 s, 669 MB/s 00:05:55.756 18:20:41 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:55.756 18:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:05:56.325 aio_disk 00:05:56.325 18:20:41 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:05:56.325 18:20:41 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:56.325 18:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:59.614 7a723c16-f7ee-42f6-8c19-15cef47b200e 
00:05:59.614 18:20:44 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:05:59.614 18:20:44 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:05:59.614 18:20:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:05:59.614 18:20:44 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:05:59.614 18:20:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:05:59.614 18:20:44 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:59.614 18:20:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:59.873 18:20:45 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:59.873 18:20:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:00.132 18:20:45 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:06:00.132 18:20:45 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:00.132 18:20:45 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:00.391 MallocForCryptoBdev 00:06:00.391 18:20:45 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:06:00.392 18:20:45 json_config -- json_config/json_config.sh@159 -- # wc -l 00:06:00.392 18:20:45 json_config -- json_config/json_config.sh@159 -- # [[ 3 -eq 0 ]] 00:06:00.392 18:20:45 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:06:00.392 18:20:45 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:00.392 18:20:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:00.651 [2024-07-15 18:20:46.003260] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:00.651 CryptoMallocBdev 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:53a17867-c6ac-497d-90c1-68ce0407890a bdev_register:950d6188-bf70-4317-9d38-91eeeb280623 bdev_register:62fc1656-922f-4a99-b6e8-36e2b4ea93c4 bdev_register:c6b36d90-c4ee-4d43-adce-b20cd7902afc 
bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:53a17867-c6ac-497d-90c1-68ce0407890a bdev_register:950d6188-bf70-4317-9d38-91eeeb280623 bdev_register:62fc1656-922f-4a99-b6e8-36e2b4ea93c4 bdev_register:c6b36d90-c4ee-4d43-adce-b20cd7902afc bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@71 -- # sort 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@72 -- # sort 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:00.651 18:20:46 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:00.651 18:20:46 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 
00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:53a17867-c6ac-497d-90c1-68ce0407890a 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:950d6188-bf70-4317-9d38-91eeeb280623 00:06:00.910 18:20:46 json_config -- 
json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:62fc1656-922f-4a99-b6e8-36e2b4ea93c4 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:c6b36d90-c4ee-4d43-adce-b20cd7902afc 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:MallocForCryptoBdev 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:53a17867-c6ac-497d-90c1-68ce0407890a bdev_register:62fc1656-922f-4a99-b6e8-36e2b4ea93c4 bdev_register:950d6188-bf70-4317-9d38-91eeeb280623 bdev_register:aio_disk bdev_register:c6b36d90-c4ee-4d43-adce-b20cd7902afc bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\3\a\1\7\8\6\7\-\c\6\a\c\-\4\9\7\d\-\9\0\c\1\-\6\8\c\e\0\4\0\7\8\9\0\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\2\f\c\1\6\5\6\-\9\2\2\f\-\4\a\9\9\-\b\6\e\8\-\3\6\e\2\b\4\e\a\9\3\c\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\5\0\d\6\1\8\8\-\b\f\7\0\-\4\3\1\7\-\9\d\3\8\-\9\1\e\e\e\b\2\8\0\6\2\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\6\b\3\6\d\9\0\-\c\4\e\e\-\4\d\4\3\-\a\d\c\e\-\b\2\0\c\d\7\9\0\2\a\f\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@86 -- # cat 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:53a17867-c6ac-497d-90c1-68ce0407890a bdev_register:62fc1656-922f-4a99-b6e8-36e2b4ea93c4 bdev_register:950d6188-bf70-4317-9d38-91eeeb280623 bdev_register:aio_disk bdev_register:c6b36d90-c4ee-4d43-adce-b20cd7902afc bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:00.910 Expected events matched: 00:06:00.910 bdev_register:53a17867-c6ac-497d-90c1-68ce0407890a 00:06:00.910 bdev_register:62fc1656-922f-4a99-b6e8-36e2b4ea93c4 00:06:00.910 
bdev_register:950d6188-bf70-4317-9d38-91eeeb280623 00:06:00.910 bdev_register:aio_disk 00:06:00.910 bdev_register:c6b36d90-c4ee-4d43-adce-b20cd7902afc 00:06:00.910 bdev_register:CryptoMallocBdev 00:06:00.910 bdev_register:Malloc0 00:06:00.910 bdev_register:Malloc0p0 00:06:00.910 bdev_register:Malloc0p1 00:06:00.910 bdev_register:Malloc0p2 00:06:00.910 bdev_register:Malloc1 00:06:00.910 bdev_register:Malloc3 00:06:00.910 bdev_register:MallocForCryptoBdev 00:06:00.910 bdev_register:Null0 00:06:00.910 bdev_register:Nvme0n1 00:06:00.910 bdev_register:Nvme0n1p0 00:06:00.910 bdev_register:Nvme0n1p1 00:06:00.910 bdev_register:PTBdevFromMalloc3 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:00.910 18:20:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.910 18:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:00.910 18:20:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.910 18:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:00.910 18:20:46 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.910 18:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.171 MallocBdevForConfigChangeCheck 00:06:01.171 18:20:46 json_config -- 
json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:01.171 18:20:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.171 18:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.171 18:20:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:01.171 18:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.512 18:20:47 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:01.512 INFO: shutting down applications... 00:06:01.512 18:20:47 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:01.512 18:20:47 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:01.512 18:20:47 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:01.512 18:20:47 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:01.770 [2024-07-15 18:20:47.263408] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:03.671 Calling clear_iscsi_subsystem 00:06:03.671 Calling clear_nvmf_subsystem 00:06:03.671 Calling clear_nbd_subsystem 00:06:03.671 Calling clear_ublk_subsystem 00:06:03.671 Calling clear_vhost_blk_subsystem 00:06:03.671 Calling clear_vhost_scsi_subsystem 00:06:03.671 Calling clear_bdev_subsystem 00:06:03.671 18:20:48 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:03.671 18:20:48 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:03.671 18:20:48 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:03.671 18:20:48 json_config -- json_config/json_config.sh@345 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.671 18:20:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:03.671 18:20:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:03.671 18:20:49 json_config -- json_config/json_config.sh@345 -- # break 00:06:03.671 18:20:49 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:03.671 18:20:49 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:03.671 18:20:49 json_config -- json_config/common.sh@31 -- # local app=target 00:06:03.671 18:20:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.671 18:20:49 json_config -- json_config/common.sh@35 -- # [[ -n 2710777 ]] 00:06:03.671 18:20:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2710777 00:06:03.671 18:20:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.671 18:20:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.671 18:20:49 json_config -- json_config/common.sh@41 -- # kill -0 2710777 00:06:03.671 18:20:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.238 18:20:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.238 18:20:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.238 18:20:49 json_config -- json_config/common.sh@41 -- # kill -0 2710777 00:06:04.238 18:20:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.238 18:20:49 json_config -- json_config/common.sh@43 -- # break 00:06:04.238 18:20:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.238 18:20:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.238 SPDK target 
shutdown done 00:06:04.238 18:20:49 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:04.238 INFO: relaunching applications... 00:06:04.238 18:20:49 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.238 18:20:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:04.238 18:20:49 json_config -- json_config/common.sh@10 -- # shift 00:06:04.238 18:20:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.238 18:20:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.238 18:20:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.238 18:20:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.238 18:20:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.238 18:20:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2713854 00:06:04.238 18:20:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.238 Waiting for target to run... 00:06:04.238 18:20:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.238 18:20:49 json_config -- json_config/common.sh@25 -- # waitforlisten 2713854 /var/tmp/spdk_tgt.sock 00:06:04.238 18:20:49 json_config -- common/autotest_common.sh@829 -- # '[' -z 2713854 ']' 00:06:04.238 18:20:49 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.238 18:20:49 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.238 18:20:49 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:04.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.238 18:20:49 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.238 18:20:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.238 [2024-07-15 18:20:49.754250] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:04.238 [2024-07-15 18:20:49.754315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713854 ] 00:06:04.804 [2024-07-15 18:20:50.087752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.804 [2024-07-15 18:20:50.172589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.804 [2024-07-15 18:20:50.226885] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:04.804 [2024-07-15 18:20:50.234921] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:04.804 [2024-07-15 18:20:50.242939] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:04.804 [2024-07-15 18:20:50.324488] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:07.336 [2024-07-15 18:20:52.474328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:07.336 [2024-07-15 18:20:52.474377] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:07.336 [2024-07-15 18:20:52.474389] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:07.336 [2024-07-15 18:20:52.482345] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
Nvme0n1 00:06:07.336 [2024-07-15 18:20:52.482371] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:07.336 [2024-07-15 18:20:52.490361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:07.336 [2024-07-15 18:20:52.490384] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:07.336 [2024-07-15 18:20:52.498409] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:07.336 [2024-07-15 18:20:52.498436] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:07.336 [2024-07-15 18:20:52.498445] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:09.868 [2024-07-15 18:20:55.371184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:09.868 [2024-07-15 18:20:55.371228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.868 [2024-07-15 18:20:55.371244] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4bf60 00:06:09.868 [2024-07-15 18:20:55.371253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.868 [2024-07-15 18:20:55.371547] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.868 [2024-07-15 18:20:55.371564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:10.435 18:20:55 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.435 18:20:55 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:10.435 18:20:55 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.435 00:06:10.435 18:20:55 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:10.435 18:20:55 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target 
configuration is the same...' 00:06:10.435 INFO: Checking if target configuration is the same... 00:06:10.435 18:20:55 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.693 18:20:55 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:10.693 18:20:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.693 + '[' 2 -ne 2 ']' 00:06:10.693 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.693 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:10.693 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:10.693 +++ basename /dev/fd/62 00:06:10.693 ++ mktemp /tmp/62.XXX 00:06:10.693 + tmp_file_1=/tmp/62.jaa 00:06:10.693 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.693 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.693 + tmp_file_2=/tmp/spdk_tgt_config.json.Nee 00:06:10.693 + ret=0 00:06:10.693 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.952 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.952 + diff -u /tmp/62.jaa /tmp/spdk_tgt_config.json.Nee 00:06:10.952 + echo 'INFO: JSON config files are the same' 00:06:10.952 INFO: JSON config files are the same 00:06:10.952 + rm /tmp/62.jaa /tmp/spdk_tgt_config.json.Nee 00:06:10.952 + exit 0 00:06:10.952 18:20:56 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:10.952 18:20:56 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:06:10.952 INFO: changing configuration and checking if this can be detected... 00:06:10.952 18:20:56 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.952 18:20:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.211 18:20:56 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.211 18:20:56 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:11.211 18:20:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.211 + '[' 2 -ne 2 ']' 00:06:11.211 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.211 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:11.211 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:11.211 +++ basename /dev/fd/62 00:06:11.211 ++ mktemp /tmp/62.XXX 00:06:11.211 + tmp_file_1=/tmp/62.pZY 00:06:11.211 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.211 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.211 + tmp_file_2=/tmp/spdk_tgt_config.json.DNC 00:06:11.211 + ret=0 00:06:11.211 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.793 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.793 + diff -u /tmp/62.pZY /tmp/spdk_tgt_config.json.DNC 00:06:11.793 + ret=1 00:06:11.793 + echo '=== Start of file: /tmp/62.pZY ===' 00:06:11.793 + cat /tmp/62.pZY 00:06:11.793 + echo '=== End of file: /tmp/62.pZY ===' 00:06:11.793 + echo '' 00:06:11.793 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DNC ===' 00:06:11.793 + cat /tmp/spdk_tgt_config.json.DNC 00:06:11.793 + echo '=== End of file: /tmp/spdk_tgt_config.json.DNC ===' 00:06:11.793 + echo '' 00:06:11.793 + rm /tmp/62.pZY /tmp/spdk_tgt_config.json.DNC 00:06:11.793 + exit 1 00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:11.793 INFO: configuration change detected. 
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:06:11.793 18:20:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:11.793 18:20:57 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@317 -- # [[ -n 2713854 ]]
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:06:11.793 18:20:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:11.793 18:20:57 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]]
00:06:11.793 18:20:57 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:06:11.793 18:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:06:12.052 18:20:57 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:06:12.052 18:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:06:12.311 18:20:57 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:06:12.311 18:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:06:12.570 18:20:57 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:06:12.570 18:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:06:12.830 18:20:58 json_config -- json_config/json_config.sh@193 -- # uname -s
00:06:12.830 18:20:58 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:06:12.830 18:20:58 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:06:12.830 18:20:58 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:06:12.830 18:20:58 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:12.830 18:20:58 json_config -- json_config/json_config.sh@323 -- # killprocess 2713854
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@948 -- # '[' -z 2713854 ']'
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@952 -- # kill -0 2713854
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@953 -- # uname
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2713854
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2713854'
00:06:12.830 killing process with pid 2713854
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@967 -- # kill 2713854
00:06:12.830 18:20:58 json_config -- common/autotest_common.sh@972 -- # wait 2713854
00:06:14.734 18:21:00 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:06:14.734 18:21:00 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:06:14.734 18:21:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:14.734 18:21:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:14.734 18:21:00 json_config -- json_config/json_config.sh@328 -- # return 0
00:06:14.734 18:21:00 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:06:14.734 INFO: Success
00:06:14.734
00:06:14.734 real	0m28.858s
00:06:14.734 user	0m35.895s
00:06:14.734 sys	0m3.081s
00:06:14.734 18:21:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:14.734 18:21:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:14.734 ************************************
00:06:14.734 END TEST json_config
00:06:14.734 ************************************
00:06:14.734 18:21:00  -- common/autotest_common.sh@1142 -- # return 0
00:06:14.734 18:21:00  -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:14.734 18:21:00  -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:14.734 18:21:00  -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:14.734 18:21:00  -- common/autotest_common.sh@10 -- # set +x
00:06:14.734 ************************************
00:06:14.734 START TEST json_config_extra_key
00:06:14.734 ************************************
00:06:14.734 18:21:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:14.734 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b98b40-9a1d-eb11-906e-0017a4403562
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b98b40-9a1d-eb11-906e-0017a4403562
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:06:14.734 18:21:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:14.734 18:21:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:14.734 18:21:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:14.734 18:21:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.734 18:21:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.734 18:21:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.734 18:21:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:14.734 18:21:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:14.734 18:21:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:14.735 18:21:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:14.735 18:21:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:14.735 18:21:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:14.735 18:21:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:14.735 18:21:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:14.735 INFO: launching applications...
00:06:14.735 18:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2715726
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:14.735 Waiting for target to run...
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2715726 /var/tmp/spdk_tgt.sock
00:06:14.735 18:21:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2715726 ']'
00:06:14.735 18:21:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json
00:06:14.735 18:21:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:14.735 18:21:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:14.735 18:21:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:14.735 18:21:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:14.735 18:21:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:14.994 [2024-07-15 18:21:00.307473] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:14.994 [2024-07-15 18:21:00.307539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715726 ]
00:06:15.254 [2024-07-15 18:21:00.802570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.512 [2024-07-15 18:21:00.891945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.770 18:21:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:15.770 18:21:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:15.770
00:06:15.770 18:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:15.770 INFO: shutting down applications...
00:06:15.770 18:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2715726 ]]
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2715726
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2715726
00:06:15.770 18:21:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2715726
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:16.337 18:21:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:16.337 SPDK target shutdown done
00:06:16.337 18:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:16.337 Success
00:06:16.337
00:06:16.337 real	0m1.614s
00:06:16.337 user	0m1.201s
00:06:16.337 sys	0m0.605s
00:06:16.337 18:21:01 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.337 18:21:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:16.337 ************************************
00:06:16.337 END TEST json_config_extra_key
00:06:16.337 ************************************
00:06:16.337 18:21:01  -- common/autotest_common.sh@1142 -- # return 0
00:06:16.337 18:21:01  -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:16.337 18:21:01  -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:16.337 18:21:01  -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:16.337 18:21:01  -- common/autotest_common.sh@10 -- # set +x
00:06:16.337 ************************************
00:06:16.337 START TEST alias_rpc
00:06:16.337 ************************************
00:06:16.337 18:21:01 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:16.595 * Looking for test storage...
00:06:16.595 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc
00:06:16.595 18:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:16.595 18:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2716083
00:06:16.595 18:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2716083
00:06:16.595 18:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:06:16.595 18:21:01 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2716083 ']'
00:06:16.595 18:21:01 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.595 18:21:01 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:16.595 18:21:01 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.595 18:21:01 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:16.595 18:21:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.595 [2024-07-15 18:21:01.989183] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:16.595 [2024-07-15 18:21:01.989247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716083 ]
00:06:16.595 [2024-07-15 18:21:02.080976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.853 [2024-07-15 18:21:02.177210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.419 18:21:02 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:17.419 18:21:02 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:17.419 18:21:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:17.677 18:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2716083
00:06:17.677 18:21:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2716083 ']'
00:06:17.677 18:21:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2716083
00:06:17.677 18:21:03 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:06:17.677 18:21:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:17.677 18:21:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716083
00:06:17.935 18:21:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:17.935 18:21:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:17.935 18:21:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716083'
00:06:17.935 killing process with pid 2716083
00:06:17.935 18:21:03 alias_rpc -- common/autotest_common.sh@967 -- # kill 2716083
00:06:17.935 18:21:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 2716083
00:06:18.194
00:06:18.194 real	0m1.776s
00:06:18.194 user	0m2.092s
00:06:18.194 sys	0m0.465s
00:06:18.194 18:21:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:18.194 18:21:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:18.194 ************************************
00:06:18.194 END TEST alias_rpc
00:06:18.194 ************************************
00:06:18.194 18:21:03  -- common/autotest_common.sh@1142 -- # return 0
00:06:18.194 18:21:03  -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:06:18.194 18:21:03  -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:18.194 18:21:03  -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:18.194 18:21:03  -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:18.194 18:21:03  -- common/autotest_common.sh@10 -- # set +x
00:06:18.194 ************************************
00:06:18.194 START TEST spdkcli_tcp
00:06:18.194 ************************************
00:06:18.194 18:21:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:18.452 * Looking for test storage...
00:06:18.452 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2716384
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2716384
00:06:18.452 18:21:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2716384 ']'
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:18.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:18.452 18:21:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:18.452 [2024-07-15 18:21:03.837973] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:18.452 [2024-07-15 18:21:03.838039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716384 ]
00:06:18.452 [2024-07-15 18:21:03.940034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:18.711 [2024-07-15 18:21:04.038164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:18.711 [2024-07-15 18:21:04.038171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.277 18:21:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:19.277 18:21:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:06:19.277 18:21:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2716596
00:06:19.277 18:21:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:19.277 18:21:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:19.535 [
00:06:19.535   "bdev_malloc_delete",
00:06:19.535   "bdev_malloc_create",
00:06:19.535   "bdev_null_resize",
00:06:19.535   "bdev_null_delete",
00:06:19.535   "bdev_null_create",
00:06:19.535   "bdev_nvme_cuse_unregister",
00:06:19.535   "bdev_nvme_cuse_register",
00:06:19.535   "bdev_opal_new_user",
00:06:19.535   "bdev_opal_set_lock_state",
00:06:19.535   "bdev_opal_delete",
00:06:19.535   "bdev_opal_get_info",
00:06:19.535   "bdev_opal_create",
00:06:19.535   "bdev_nvme_opal_revert",
00:06:19.535   "bdev_nvme_opal_init",
00:06:19.535   "bdev_nvme_send_cmd",
00:06:19.535   "bdev_nvme_get_path_iostat",
00:06:19.535   "bdev_nvme_get_mdns_discovery_info",
00:06:19.535   "bdev_nvme_stop_mdns_discovery",
00:06:19.535   "bdev_nvme_start_mdns_discovery",
00:06:19.535   "bdev_nvme_set_multipath_policy",
00:06:19.535   "bdev_nvme_set_preferred_path",
00:06:19.535   "bdev_nvme_get_io_paths",
00:06:19.535   "bdev_nvme_remove_error_injection",
00:06:19.535   "bdev_nvme_add_error_injection",
00:06:19.535   "bdev_nvme_get_discovery_info",
00:06:19.535   "bdev_nvme_stop_discovery",
00:06:19.535   "bdev_nvme_start_discovery",
00:06:19.535   "bdev_nvme_get_controller_health_info",
00:06:19.535   "bdev_nvme_disable_controller",
00:06:19.535   "bdev_nvme_enable_controller",
00:06:19.535   "bdev_nvme_reset_controller",
00:06:19.535   "bdev_nvme_get_transport_statistics",
00:06:19.535   "bdev_nvme_apply_firmware",
00:06:19.535   "bdev_nvme_detach_controller",
00:06:19.535   "bdev_nvme_get_controllers",
00:06:19.535   "bdev_nvme_attach_controller",
00:06:19.535   "bdev_nvme_set_hotplug",
00:06:19.535   "bdev_nvme_set_options",
00:06:19.535   "bdev_passthru_delete",
00:06:19.535   "bdev_passthru_create",
00:06:19.535   "bdev_lvol_set_parent_bdev",
00:06:19.535   "bdev_lvol_set_parent",
00:06:19.535   "bdev_lvol_check_shallow_copy",
00:06:19.535   "bdev_lvol_start_shallow_copy",
00:06:19.535   "bdev_lvol_grow_lvstore",
00:06:19.535   "bdev_lvol_get_lvols",
00:06:19.535   "bdev_lvol_get_lvstores",
00:06:19.535   "bdev_lvol_delete",
00:06:19.535   "bdev_lvol_set_read_only",
00:06:19.535   "bdev_lvol_resize",
00:06:19.535   "bdev_lvol_decouple_parent",
00:06:19.535   "bdev_lvol_inflate",
00:06:19.535   "bdev_lvol_rename",
00:06:19.535   "bdev_lvol_clone_bdev",
00:06:19.535   "bdev_lvol_clone",
00:06:19.535   "bdev_lvol_snapshot",
00:06:19.535   "bdev_lvol_create",
00:06:19.535   "bdev_lvol_delete_lvstore",
00:06:19.535   "bdev_lvol_rename_lvstore",
00:06:19.535   "bdev_lvol_create_lvstore",
00:06:19.535   "bdev_raid_set_options",
00:06:19.535   "bdev_raid_remove_base_bdev",
00:06:19.535   "bdev_raid_add_base_bdev",
00:06:19.535   "bdev_raid_delete",
00:06:19.535   "bdev_raid_create",
00:06:19.535   "bdev_raid_get_bdevs",
00:06:19.535   "bdev_error_inject_error",
00:06:19.535   "bdev_error_delete",
00:06:19.535   "bdev_error_create",
00:06:19.535   "bdev_split_delete",
00:06:19.535   "bdev_split_create",
00:06:19.536   "bdev_delay_delete",
00:06:19.536   "bdev_delay_create",
00:06:19.536   "bdev_delay_update_latency",
00:06:19.536   "bdev_zone_block_delete",
00:06:19.536   "bdev_zone_block_create",
00:06:19.536   "blobfs_create",
00:06:19.536   "blobfs_detect",
00:06:19.536   "blobfs_set_cache_size",
00:06:19.536   "bdev_crypto_delete",
00:06:19.536   "bdev_crypto_create",
00:06:19.536   "bdev_compress_delete",
00:06:19.536   "bdev_compress_create",
00:06:19.536   "bdev_compress_get_orphans",
00:06:19.536   "bdev_aio_delete",
00:06:19.536   "bdev_aio_rescan",
00:06:19.536   "bdev_aio_create",
00:06:19.536   "bdev_ftl_set_property",
00:06:19.536   "bdev_ftl_get_properties",
00:06:19.536   "bdev_ftl_get_stats",
00:06:19.536   "bdev_ftl_unmap",
00:06:19.536   "bdev_ftl_unload",
00:06:19.536   "bdev_ftl_delete",
00:06:19.536   "bdev_ftl_load",
00:06:19.536   "bdev_ftl_create",
00:06:19.536   "bdev_virtio_attach_controller",
00:06:19.536   "bdev_virtio_scsi_get_devices",
00:06:19.536   "bdev_virtio_detach_controller",
00:06:19.536   "bdev_virtio_blk_set_hotplug",
00:06:19.536   "bdev_iscsi_delete",
00:06:19.536   "bdev_iscsi_create",
00:06:19.536   "bdev_iscsi_set_options",
00:06:19.536   "accel_error_inject_error",
00:06:19.536   "ioat_scan_accel_module",
00:06:19.536   "dsa_scan_accel_module",
00:06:19.536   "iaa_scan_accel_module",
00:06:19.536   "dpdk_cryptodev_get_driver",
00:06:19.536   "dpdk_cryptodev_set_driver",
00:06:19.536   "dpdk_cryptodev_scan_accel_module",
00:06:19.536   "compressdev_scan_accel_module",
00:06:19.536   "keyring_file_remove_key",
00:06:19.536   "keyring_file_add_key",
00:06:19.536   "keyring_linux_set_options",
00:06:19.536   "iscsi_get_histogram",
00:06:19.536   "iscsi_enable_histogram",
00:06:19.536   "iscsi_set_options",
00:06:19.536   "iscsi_get_auth_groups",
00:06:19.536   "iscsi_auth_group_remove_secret",
00:06:19.536   "iscsi_auth_group_add_secret",
00:06:19.536   "iscsi_delete_auth_group",
00:06:19.536   "iscsi_create_auth_group",
00:06:19.536   "iscsi_set_discovery_auth",
00:06:19.536   "iscsi_get_options",
00:06:19.536   "iscsi_target_node_request_logout",
00:06:19.536   "iscsi_target_node_set_redirect",
00:06:19.536   "iscsi_target_node_set_auth",
00:06:19.536   "iscsi_target_node_add_lun",
00:06:19.536   "iscsi_get_stats",
00:06:19.536   "iscsi_get_connections",
00:06:19.536   "iscsi_portal_group_set_auth",
00:06:19.536   "iscsi_start_portal_group",
00:06:19.536   "iscsi_delete_portal_group",
00:06:19.536   "iscsi_create_portal_group",
00:06:19.536   "iscsi_get_portal_groups",
00:06:19.536   "iscsi_delete_target_node",
00:06:19.536   "iscsi_target_node_remove_pg_ig_maps",
00:06:19.536   "iscsi_target_node_add_pg_ig_maps",
00:06:19.536   "iscsi_create_target_node",
00:06:19.536   "iscsi_get_target_nodes",
00:06:19.536   "iscsi_delete_initiator_group",
00:06:19.536   "iscsi_initiator_group_remove_initiators",
00:06:19.536   "iscsi_initiator_group_add_initiators",
00:06:19.536   "iscsi_create_initiator_group",
00:06:19.536   "iscsi_get_initiator_groups",
00:06:19.536   "nvmf_set_crdt",
00:06:19.536   "nvmf_set_config",
00:06:19.536   "nvmf_set_max_subsystems",
00:06:19.536   "nvmf_stop_mdns_prr",
00:06:19.536   "nvmf_publish_mdns_prr",
00:06:19.536   "nvmf_subsystem_get_listeners",
00:06:19.536   "nvmf_subsystem_get_qpairs",
00:06:19.536   "nvmf_subsystem_get_controllers",
00:06:19.536   "nvmf_get_stats",
00:06:19.536   "nvmf_get_transports",
00:06:19.536   "nvmf_create_transport",
00:06:19.536   "nvmf_get_targets",
00:06:19.536   "nvmf_delete_target",
00:06:19.536   "nvmf_create_target",
00:06:19.536   "nvmf_subsystem_allow_any_host",
00:06:19.536   "nvmf_subsystem_remove_host",
00:06:19.536   "nvmf_subsystem_add_host",
00:06:19.536   "nvmf_ns_remove_host",
00:06:19.536   "nvmf_ns_add_host",
00:06:19.536   "nvmf_subsystem_remove_ns",
00:06:19.536   "nvmf_subsystem_add_ns",
00:06:19.536   "nvmf_subsystem_listener_set_ana_state",
00:06:19.536   "nvmf_discovery_get_referrals",
00:06:19.536   "nvmf_discovery_remove_referral",
00:06:19.536   "nvmf_discovery_add_referral",
00:06:19.536   "nvmf_subsystem_remove_listener",
00:06:19.536   "nvmf_subsystem_add_listener",
00:06:19.536   "nvmf_delete_subsystem",
00:06:19.536   "nvmf_create_subsystem",
00:06:19.536   "nvmf_get_subsystems",
00:06:19.536   "env_dpdk_get_mem_stats",
00:06:19.536   "nbd_get_disks",
00:06:19.536   "nbd_stop_disk",
00:06:19.536   "nbd_start_disk",
00:06:19.536   "ublk_recover_disk",
00:06:19.536   "ublk_get_disks",
00:06:19.536   "ublk_stop_disk",
00:06:19.536   "ublk_start_disk",
00:06:19.536   "ublk_destroy_target",
00:06:19.536   "ublk_create_target",
00:06:19.536   "virtio_blk_create_transport",
00:06:19.536   "virtio_blk_get_transports",
00:06:19.536   "vhost_controller_set_coalescing",
00:06:19.536   "vhost_get_controllers",
00:06:19.536   "vhost_delete_controller",
00:06:19.536   "vhost_create_blk_controller",
00:06:19.536   "vhost_scsi_controller_remove_target",
00:06:19.536   "vhost_scsi_controller_add_target",
00:06:19.536   "vhost_start_scsi_controller",
00:06:19.536   "vhost_create_scsi_controller",
00:06:19.536   "thread_set_cpumask",
00:06:19.536   "framework_get_governor",
00:06:19.536   "framework_get_scheduler",
00:06:19.536   "framework_set_scheduler",
00:06:19.536   "framework_get_reactors",
00:06:19.536   "thread_get_io_channels",
00:06:19.536   "thread_get_pollers",
00:06:19.536   "thread_get_stats",
00:06:19.536   "framework_monitor_context_switch",
00:06:19.536   "spdk_kill_instance",
00:06:19.536   "log_enable_timestamps",
00:06:19.536   "log_get_flags",
00:06:19.536   "log_clear_flag",
00:06:19.536   "log_set_flag",
00:06:19.536   "log_get_level",
00:06:19.536   "log_set_level",
00:06:19.536   "log_get_print_level",
00:06:19.536   "log_set_print_level",
00:06:19.536   "framework_enable_cpumask_locks",
00:06:19.536   "framework_disable_cpumask_locks",
00:06:19.536   "framework_wait_init",
00:06:19.536   "framework_start_init",
00:06:19.536   "scsi_get_devices",
00:06:19.536   "bdev_get_histogram",
00:06:19.536   "bdev_enable_histogram",
00:06:19.536   "bdev_set_qos_limit",
00:06:19.536   "bdev_set_qd_sampling_period",
00:06:19.536   "bdev_get_bdevs",
00:06:19.536   "bdev_reset_iostat",
00:06:19.536   "bdev_get_iostat",
00:06:19.536   "bdev_examine",
00:06:19.536   "bdev_wait_for_examine",
00:06:19.536   "bdev_set_options",
00:06:19.536   "notify_get_notifications",
00:06:19.536   "notify_get_types",
00:06:19.536   "accel_get_stats",
00:06:19.536   "accel_set_options",
00:06:19.536   "accel_set_driver",
00:06:19.536   "accel_crypto_key_destroy",
00:06:19.536   "accel_crypto_keys_get",
00:06:19.536   "accel_crypto_key_create",
00:06:19.536   "accel_assign_opc",
00:06:19.536   "accel_get_module_info",
00:06:19.536   "accel_get_opc_assignments",
00:06:19.536   "vmd_rescan",
00:06:19.536   "vmd_remove_device",
00:06:19.536   "vmd_enable",
00:06:19.536   "sock_get_default_impl",
00:06:19.536   "sock_set_default_impl",
00:06:19.536   "sock_impl_set_options",
00:06:19.536   "sock_impl_get_options",
00:06:19.536   "iobuf_get_stats",
00:06:19.536   "iobuf_set_options",
00:06:19.536   "framework_get_pci_devices",
00:06:19.536   "framework_get_config",
00:06:19.536   "framework_get_subsystems",
00:06:19.536   "trace_get_info",
00:06:19.536   "trace_get_tpoint_group_mask",
00:06:19.536   "trace_disable_tpoint_group",
00:06:19.536   "trace_enable_tpoint_group",
00:06:19.536   "trace_clear_tpoint_mask",
00:06:19.536   "trace_set_tpoint_mask",
00:06:19.536   "keyring_get_keys",
00:06:19.536   "spdk_get_version",
00:06:19.536   "rpc_get_methods"
00:06:19.536 ]
00:06:19.536 18:21:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:19.536 18:21:05 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:19.536 18:21:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:19.536 18:21:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:19.536 18:21:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2716384
00:06:19.536 18:21:05 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2716384 ']'
00:06:19.536 18:21:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2716384
00:06:19.794 18:21:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716384
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716384'
00:06:19.795 killing process with pid 2716384
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2716384
00:06:19.795 18:21:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2716384
00:06:20.053
00:06:20.053 real	0m1.801s
00:06:20.053 user	0m3.464s
00:06:20.053 sys	0m0.496s
00:06:20.053 18:21:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:20.053 18:21:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:20.053 ************************************
00:06:20.053 END TEST spdkcli_tcp
00:06:20.053 ************************************
00:06:20.053 18:21:05  -- common/autotest_common.sh@1142 -- # return 0
00:06:20.053 18:21:05  -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:20.053 18:21:05  -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:20.053 18:21:05  -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.053 18:21:05  -- common/autotest_common.sh@10 -- # set +x
00:06:20.053 ************************************
00:06:20.053 START TEST dpdk_mem_utility
00:06:20.053 ************************************
00:06:20.053 18:21:05 dpdk_mem_utility --
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:20.310 * Looking for test storage...
00:06:20.310 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility
00:06:20.310 18:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:20.310 18:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2716869
00:06:20.310 18:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:06:20.310 18:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2716869
00:06:20.310 18:21:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2716869 ']'
00:06:20.310 18:21:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:20.310 18:21:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:20.310 18:21:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:20.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:20.310 18:21:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:20.310 18:21:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:20.310 [2024-07-15 18:21:05.698993] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:20.310 [2024-07-15 18:21:05.699055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716869 ]
00:06:20.310 [2024-07-15 18:21:05.800847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.568 [2024-07-15 18:21:05.896170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.133 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:21.133 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:06:21.133 18:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:21.133 18:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:21.133 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:21.133 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:21.133 {
00:06:21.133 "filename": "/tmp/spdk_mem_dump.txt"
00:06:21.133 }
00:06:21.133 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:21.133 18:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:21.423 DPDK memory size 814.000000 MiB in 1 heap(s)
00:06:21.423 1 heaps totaling size 814.000000 MiB
00:06:21.423 size: 814.000000 MiB heap id: 0
00:06:21.423 end heaps----------
00:06:21.423 8 mempools totaling size 598.116089 MiB
00:06:21.423 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:21.423 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:21.423 size: 84.521057 MiB name: bdev_io_2716869
00:06:21.423 size: 51.011292 MiB name: evtpool_2716869
00:06:21.423 size: 50.003479 MiB name: msgpool_2716869
size: 21.763794 MiB name: PDU_Pool 00:06:21.424 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:21.424 size: 0.026123 MiB name: Session_Pool 00:06:21.424 end mempools------- 00:06:21.424 201 memzones totaling size 4.176453 MiB 00:06:21.424 size: 1.000366 MiB name: RG_ring_0_2716869 00:06:21.424 size: 1.000366 MiB name: RG_ring_1_2716869 00:06:21.424 size: 1.000366 MiB name: RG_ring_4_2716869 00:06:21.424 size: 1.000366 MiB name: RG_ring_5_2716869 00:06:21.424 size: 0.125366 MiB name: RG_ring_2_2716869 00:06:21.424 size: 0.015991 MiB name: RG_ring_3_2716869 00:06:21.424 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.0_qat 
00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:21.424 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:21.424 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_1 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:21.424 size: 
0.000122 MiB name: rte_compressdev_data_3 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_4 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_5 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_6 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_7 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_8 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_9 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_10 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_11 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_12 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_13 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:21.424 size: 
0.000122 MiB name: rte_compressdev_data_14 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_15 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_16 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_17 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_18 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_19 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_20 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_21 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_22 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_23 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_24 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:21.424 
size: 0.000122 MiB name: rte_compressdev_data_25 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_26 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_27 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_28 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_29 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_30 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_31 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_32 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_33 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_34 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_35 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_73 
00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_36 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_37 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_38 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_39 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_40 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_41 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:21.424 size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:21.424 size: 0.000122 MiB name: rte_compressdev_data_42 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:21.425 size: 0.000122 MiB name: rte_compressdev_data_43 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:21.425 size: 0.000122 MiB name: rte_compressdev_data_44 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:21.425 size: 0.000122 MiB name: rte_compressdev_data_45 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:21.425 size: 0.000122 MiB name: rte_compressdev_data_46 00:06:21.425 size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:21.425 size: 0.000122 MiB name: 
rte_cryptodev_data_95 00:06:21.425 size: 0.000122 MiB name: rte_compressdev_data_47 00:06:21.425 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:21.425 end memzones------- 00:06:21.425 18:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:21.425 heap id: 0 total size: 814.000000 MiB number of busy elements: 627 number of free elements: 14 00:06:21.425 list of free elements. size: 11.783203 MiB 00:06:21.425 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:21.425 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:21.425 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:21.425 element at address: 0x200003e00000 with size: 0.996460 MiB 00:06:21.425 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:21.425 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:21.425 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:21.425 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:21.425 element at address: 0x20001aa00000 with size: 0.566589 MiB 00:06:21.425 element at address: 0x200003a00000 with size: 0.494324 MiB 00:06:21.425 element at address: 0x20000b200000 with size: 0.489258 MiB 00:06:21.425 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:21.425 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:21.425 element at address: 0x200027e00000 with size: 0.395935 MiB 00:06:21.425 list of standard malloc elements. 
size: 199.896790 MiB 00:06:21.425 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:21.425 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:21.425 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:21.425 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:21.425 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:21.425 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:21.425 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:21.425 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:21.425 element at address: 0x20000032bc80 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000032f740 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000333200 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000336cc0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000033a780 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000033e240 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000341d00 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003457c0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000349280 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000034cd40 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000350800 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003542c0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000357d80 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000035b840 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000035f300 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000366880 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000036a340 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000036de00 with size: 0.004395 MiB 00:06:21.425 element at 
address: 0x2000003718c0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000375380 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000378e40 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000037c900 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000383e80 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000387940 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000038b400 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000392980 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000396440 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000399f00 with size: 0.004395 MiB 00:06:21.425 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003bea80 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:06:21.425 element at address: 0x2000003d4b00 with size: 0.004395 MiB 
00:06:21.425 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:21.425 element at address: 0x200000329b80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000032ac00 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000032d640 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000032e6c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000331100 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000332180 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000334bc0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000335c40 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000338680 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000339700 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000033c140 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000033d1c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000033fc00 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000340c80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003436c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000344740 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000347180 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000348200 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000034ac40 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000034bcc0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000034e700 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000034f780 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003521c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000353240 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000355c80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000356d00 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000359740 with 
size: 0.004028 MiB 00:06:21.425 element at address: 0x20000035a7c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000035d200 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000035e280 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000361d40 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000364780 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000365800 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000368240 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000370840 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000373280 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000374300 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000376d40 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000037a800 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000037b880 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000037f340 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000381d80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000382e00 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000385840 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000389300 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000038a380 with size: 0.004028 MiB 00:06:21.425 element at address: 
0x20000038cdc0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000038de40 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000390880 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000391900 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000394340 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000397e00 with size: 0.004028 MiB 00:06:21.425 element at address: 0x200000398e80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000039c940 with size: 0.004028 MiB 00:06:21.425 element at address: 0x20000039f380 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:06:21.425 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:06:21.426 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:06:21.426 
element at address: 0x2000003c0440 with size: 0.004028 MiB
[... several hundred further per-element heap entries (0.004028 MiB, 0.000305 MiB, and 0.000183 MiB allocations) omitted ...]
00:06:21.429 list of memzone associated elements.
size: 602.320007 MiB 00:06:21.429 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:21.429 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:21.429 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:21.429 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:21.429 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:21.429 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2716869_0 00:06:21.429 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:21.429 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2716869_0 00:06:21.429 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:21.429 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2716869_0 00:06:21.429 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:21.429 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:21.429 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:21.429 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:21.429 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:21.429 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2716869 00:06:21.429 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:21.429 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2716869 00:06:21.429 element at address: 0x2000002271c0 with size: 1.008118 MiB 00:06:21.429 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2716869 00:06:21.429 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:21.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:21.429 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:21.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:21.429 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:21.429 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:21.429 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:21.429 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:21.429 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:21.429 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2716869 00:06:21.429 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:21.429 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2716869 00:06:21.429 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:21.429 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2716869 00:06:21.429 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:21.429 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2716869 00:06:21.429 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:21.429 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2716869 00:06:21.429 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:21.429 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:21.429 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:21.429 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:21.429 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:21.429 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:21.429 element at address: 0x200000205440 with size: 0.125488 MiB 00:06:21.429 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2716869 00:06:21.429 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:21.429 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:21.429 element at address: 0x200027e65740 with size: 0.023743 MiB 00:06:21.429 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:21.429 element at address: 0x200000201180 with size: 0.016113 
MiB 00:06:21.429 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2716869 00:06:21.429 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:06:21.429 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:21.429 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:06:21.429 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:21.429 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:21.429 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:21.429 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:21.429 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:21.429 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:21.429 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:21.429 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:21.429 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:21.429 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:21.429 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:21.429 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:06:21.429 
associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:21.429 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:21.429 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:21.429 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:21.429 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:21.429 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:21.429 element at address: 0x20000039b700 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:21.429 element at address: 0x200000397c40 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:21.429 element at address: 0x200000394180 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:21.429 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:21.429 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:21.429 element at address: 0x200000389140 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:21.429 element at address: 0x200000385680 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:21.429 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:06:21.429 associated memzone 
info: size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:21.429 element at address: 0x20000037e100 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:21.429 element at address: 0x20000037a640 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:21.429 element at address: 0x200000376b80 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:21.429 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:21.429 element at address: 0x20000036f600 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:21.429 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:21.429 element at address: 0x200000368080 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:21.429 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:21.429 element at address: 0x200000360b00 with size: 0.000427 MiB 00:06:21.429 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:21.430 element at address: 0x20000035d040 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:21.430 element at address: 0x200000359580 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:21.430 element at address: 0x200000355ac0 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:21.430 element at address: 0x200000352000 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 
MiB name: 0000:1e:01.4_qat 00:06:21.430 element at address: 0x20000034e540 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:21.430 element at address: 0x20000034aa80 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:21.430 element at address: 0x200000346fc0 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:21.430 element at address: 0x200000343500 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:21.430 element at address: 0x20000033fa40 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:21.430 element at address: 0x20000033bf80 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:21.430 element at address: 0x2000003384c0 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:21.430 element at address: 0x200000334a00 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:21.430 element at address: 0x200000330f40 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:21.430 element at address: 0x20000032d480 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:21.430 element at address: 0x2000003299c0 with size: 0.000427 MiB 00:06:21.430 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:21.430 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:06:21.430 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:21.430 element at address: 0x2000002263c0 with size: 0.000305 MiB 00:06:21.430 associated memzone info: size: 0.000183 MiB name: 
MP_msgpool_2716869 00:06:21.430 element at address: 0x200000200f80 with size: 0.000305 MiB 00:06:21.430 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2716869 00:06:21.430 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:06:21.430 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:21.430 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:21.430 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:21.430 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:06:21.430 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:21.430 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:21.430 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:06:21.430 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:21.430 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:21.430 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:06:21.430 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:21.430 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:06:21.430 associated memzone info: 
size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:21.430 element at address: 0x2000003cad80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:06:21.430 element at address: 0x2000003c7700 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:21.430 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:21.430 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:06:21.430 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:21.430 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:21.430 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:06:21.430 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:21.430 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:21.430 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:06:21.430 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:21.430 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:21.430 element at address: 0x2000003bc280 with size: 0.000244 
MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:06:21.430 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:21.430 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:21.430 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:06:21.430 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:21.430 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:21.430 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:06:21.430 element at address: 0x2000003b1680 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:21.430 element at address: 0x2000003b14c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:21.430 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:06:21.430 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:21.430 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:21.430 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:06:21.430 
element at address: 0x2000003aa100 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:21.430 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:21.430 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:06:21.430 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:21.430 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:21.430 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:06:21.430 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:21.430 element at address: 0x2000003a29c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:21.430 element at address: 0x2000003a2740 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:06:21.430 element at address: 0x20000039f0c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:21.430 element at address: 0x20000039ef00 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:21.430 element at address: 0x20000039ec80 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:06:21.430 element at address: 0x20000039b600 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_32 00:06:21.430 element at address: 0x20000039b440 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:21.430 element at address: 0x20000039b1c0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:06:21.430 element at address: 0x200000397b40 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:21.430 element at address: 0x200000397980 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:21.430 element at address: 0x200000397700 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:06:21.430 element at address: 0x200000394080 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:21.430 element at address: 0x200000393ec0 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:21.430 element at address: 0x200000393c40 with size: 0.000244 MiB 00:06:21.430 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:06:21.430 element at address: 0x2000003905c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:21.431 element at address: 0x200000390400 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:21.431 element at address: 0x200000390180 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:06:21.431 element at address: 0x20000038cb00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:21.431 element at address: 0x20000038c940 with size: 
0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:21.431 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:06:21.431 element at address: 0x200000389040 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:21.431 element at address: 0x200000388e80 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:21.431 element at address: 0x200000388c00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:06:21.431 element at address: 0x200000385580 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:21.431 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:21.431 element at address: 0x200000385140 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:06:21.431 element at address: 0x200000381ac0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:21.431 element at address: 0x200000381900 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:21.431 element at address: 0x200000381680 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:06:21.431 element at address: 0x20000037e000 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:21.431 element at address: 0x20000037de40 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 
00:06:21.431 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:06:21.431 element at address: 0x20000037a540 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:21.431 element at address: 0x20000037a380 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:21.431 element at address: 0x20000037a100 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:06:21.431 element at address: 0x200000376a80 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:21.431 element at address: 0x2000003768c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:21.431 element at address: 0x200000376640 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:06:21.431 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:21.431 element at address: 0x200000372e00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:21.431 element at address: 0x200000372b80 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:06:21.431 element at address: 0x20000036f500 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:21.431 element at address: 0x20000036f340 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:21.431 element at address: 0x20000036f0c0 with size: 0.000244 MiB 00:06:21.431 associated memzone 
info: size: 0.000122 MiB name: rte_compressdev_data_28 00:06:21.431 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:21.431 element at address: 0x20000036b880 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:21.431 element at address: 0x20000036b600 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:06:21.431 element at address: 0x200000367f80 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:21.431 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:21.431 element at address: 0x200000367b40 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:06:21.431 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:21.431 element at address: 0x200000364300 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:21.431 element at address: 0x200000364080 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:06:21.431 element at address: 0x200000360a00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:21.431 element at address: 0x200000360840 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:21.431 element at address: 0x2000003605c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:06:21.431 element at address: 0x20000035cf40 with 
size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:21.431 element at address: 0x20000035cd80 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:21.431 element at address: 0x20000035cb00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:06:21.431 element at address: 0x200000359480 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:21.431 element at address: 0x2000003592c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:21.431 element at address: 0x200000359040 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:06:21.431 element at address: 0x2000003559c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:21.431 element at address: 0x200000355800 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:21.431 element at address: 0x200000355580 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:06:21.431 element at address: 0x200000351f00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:21.431 element at address: 0x200000351d40 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:21.431 element at address: 0x200000351ac0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:06:21.431 element at address: 0x20000034e440 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 
00:06:21.431 element at address: 0x20000034e280 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:21.431 element at address: 0x20000034e000 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:06:21.431 element at address: 0x20000034a980 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:21.431 element at address: 0x20000034a7c0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:21.431 element at address: 0x20000034a540 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:06:21.431 element at address: 0x200000346ec0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:21.431 element at address: 0x200000346d00 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:21.431 element at address: 0x200000346a80 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:06:21.431 element at address: 0x200000343400 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:21.431 element at address: 0x200000343240 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:21.431 element at address: 0x200000342fc0 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:06:21.431 element at address: 0x20000033f940 with size: 0.000244 MiB 00:06:21.431 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:21.431 element at address: 0x20000033f780 with size: 0.000244 MiB 00:06:21.432 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:21.432 element at address: 0x20000033f500 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:06:21.432 element at address: 0x20000033be80 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:21.432 element at address: 0x20000033bcc0 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:21.432 element at address: 0x20000033ba40 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:06:21.432 element at address: 0x2000003383c0 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:21.432 element at address: 0x200000338200 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:21.432 element at address: 0x200000337f80 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:06:21.432 element at address: 0x200000334900 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:21.432 element at address: 0x200000334740 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:21.432 element at address: 0x2000003344c0 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:06:21.432 element at address: 0x200000330e40 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:21.432 element at address: 0x200000330c80 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:21.432 element at address: 0x200000330a00 with 
size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:06:21.432 element at address: 0x20000032d380 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:21.432 element at address: 0x20000032d1c0 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:21.432 element at address: 0x20000032cf40 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:06:21.432 element at address: 0x2000003298c0 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:21.432 element at address: 0x200000329700 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:21.432 element at address: 0x200000329480 with size: 0.000244 MiB 00:06:21.432 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:06:21.432 element at address: 0x2000003d5d00 with size: 0.000183 MiB 00:06:21.432 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:21.432 18:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:21.432 18:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2716869 00:06:21.432 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2716869 ']' 00:06:21.432 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2716869 00:06:21.432 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:21.432 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.432 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716869 00:06:21.689 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:21.689 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.689 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716869' 00:06:21.689 killing process with pid 2716869 00:06:21.689 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2716869 00:06:21.689 18:21:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2716869 00:06:21.946 00:06:21.946 real 0m1.778s 00:06:21.946 user 0m2.108s 00:06:21.946 sys 0m0.469s 00:06:21.946 18:21:07 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.946 18:21:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.946 ************************************ 00:06:21.946 END TEST dpdk_mem_utility 00:06:21.946 ************************************ 00:06:21.946 18:21:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.946 18:21:07 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:21.946 18:21:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.946 18:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.946 18:21:07 -- common/autotest_common.sh@10 -- # set +x 00:06:21.946 ************************************ 00:06:21.946 START TEST event 00:06:21.946 ************************************ 00:06:21.946 18:21:07 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:21.946 * Looking for test storage... 
00:06:21.946 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:21.946 18:21:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:21.946 18:21:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.946 18:21:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.946 18:21:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:21.946 18:21:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.946 18:21:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.203 ************************************ 00:06:22.203 START TEST event_perf 00:06:22.203 ************************************ 00:06:22.203 18:21:07 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.203 Running I/O for 1 seconds...[2024-07-15 18:21:07.534877] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:22.203 [2024-07-15 18:21:07.534944] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717155 ] 00:06:22.203 [2024-07-15 18:21:07.634389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.203 [2024-07-15 18:21:07.730231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.203 [2024-07-15 18:21:07.730335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.203 [2024-07-15 18:21:07.730438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.203 [2024-07-15 18:21:07.730439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.573 Running I/O for 1 seconds... 00:06:23.573 lcore 0: 103612 00:06:23.573 lcore 1: 103615 00:06:23.573 lcore 2: 103618 00:06:23.573 lcore 3: 103616 00:06:23.573 done. 
00:06:23.573 00:06:23.574 real 0m1.304s 00:06:23.574 user 0m4.184s 00:06:23.574 sys 0m0.110s 00:06:23.574 18:21:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.574 18:21:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.574 ************************************ 00:06:23.574 END TEST event_perf 00:06:23.574 ************************************ 00:06:23.574 18:21:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.574 18:21:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:23.574 18:21:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.574 18:21:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.574 18:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.574 ************************************ 00:06:23.574 START TEST event_reactor 00:06:23.574 ************************************ 00:06:23.574 18:21:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:23.574 [2024-07-15 18:21:08.906356] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:23.574 [2024-07-15 18:21:08.906413] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717558 ] 00:06:23.574 [2024-07-15 18:21:09.007323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.574 [2024-07-15 18:21:09.099852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.949 test_start 00:06:24.949 oneshot 00:06:24.949 tick 100 00:06:24.949 tick 100 00:06:24.949 tick 250 00:06:24.949 tick 100 00:06:24.949 tick 100 00:06:24.949 tick 100 00:06:24.949 tick 250 00:06:24.949 tick 500 00:06:24.949 tick 100 00:06:24.949 tick 100 00:06:24.949 tick 250 00:06:24.949 tick 100 00:06:24.949 tick 100 00:06:24.949 test_end 00:06:24.949 00:06:24.949 real 0m1.301s 00:06:24.949 user 0m1.192s 00:06:24.949 sys 0m0.102s 00:06:24.949 18:21:10 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.949 18:21:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 ************************************ 00:06:24.949 END TEST event_reactor 00:06:24.949 ************************************ 00:06:24.949 18:21:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.949 18:21:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:24.949 18:21:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.949 18:21:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.949 18:21:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 ************************************ 00:06:24.949 START TEST event_reactor_perf 00:06:24.949 ************************************ 00:06:24.949 18:21:10 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:24.949 [2024-07-15 18:21:10.274934] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:24.949 [2024-07-15 18:21:10.274997] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718009 ] 00:06:24.949 [2024-07-15 18:21:10.371675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.949 [2024-07-15 18:21:10.462416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.324 test_start 00:06:26.324 test_end 00:06:26.324 Performance: 298603 events per second 00:06:26.324 00:06:26.324 real 0m1.293s 00:06:26.324 user 0m1.180s 00:06:26.324 sys 0m0.107s 00:06:26.324 18:21:11 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.324 18:21:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.324 ************************************ 00:06:26.324 END TEST event_reactor_perf 00:06:26.324 ************************************ 00:06:26.324 18:21:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.324 18:21:11 event -- event/event.sh@49 -- # uname -s 00:06:26.324 18:21:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:26.324 18:21:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:26.324 18:21:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.324 18:21:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.324 18:21:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.324 ************************************ 00:06:26.324 START TEST event_scheduler 00:06:26.324 ************************************ 
00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:26.324 * Looking for test storage... 00:06:26.324 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:26.324 18:21:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:26.324 18:21:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2718281 00:06:26.324 18:21:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.324 18:21:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:26.324 18:21:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2718281 00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2718281 ']' 00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.324 18:21:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.324 [2024-07-15 18:21:11.754367] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:26.324 [2024-07-15 18:21:11.754411] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718281 ] 00:06:26.583 [2024-07-15 18:21:11.883208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.583 [2024-07-15 18:21:12.039055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.583 [2024-07-15 18:21:12.039148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.583 [2024-07-15 18:21:12.039256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.583 [2024-07-15 18:21:12.039266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.151 18:21:12 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.151 18:21:12 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:27.151 18:21:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.151 18:21:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.151 18:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 [2024-07-15 18:21:12.646400] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:27.151 [2024-07-15 18:21:12.646445] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.151 [2024-07-15 18:21:12.646470] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.151 [2024-07-15 18:21:12.646487] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.151 [2024-07-15 18:21:12.646502] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.151 18:21:12 
event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.151 18:21:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.151 18:21:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.151 18:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 [2024-07-15 18:21:12.764162] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:27.411 18:21:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.411 18:21:12 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.411 18:21:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 ************************************ 00:06:27.411 START TEST scheduler_create_thread 00:06:27.411 ************************************ 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 2 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
active_pinned -m 0x2 -a 100 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 3 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 4 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 5 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 6 00:06:27.411 18:21:12 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 7 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 8 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 9 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.411 18:21:12 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 10 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.411 18:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.349 18:21:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.349 18:21:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.349 18:21:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.349 18:21:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.726 18:21:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.726 18:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:29.726 18:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:29.726 18:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.726 18:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.663 18:21:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.663 00:06:30.663 real 0m3.383s 00:06:30.663 user 0m0.022s 00:06:30.663 sys 0m0.007s 00:06:30.663 18:21:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.663 18:21:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.663 ************************************ 00:06:30.663 END TEST scheduler_create_thread 00:06:30.663 ************************************ 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:30.922 18:21:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.922 18:21:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2718281 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2718281 ']' 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2718281 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718281 00:06:30.922 18:21:16 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718281' 00:06:30.922 killing process with pid 2718281 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2718281 00:06:30.922 18:21:16 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2718281 00:06:31.181 [2024-07-15 18:21:16.563032] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:31.440 00:06:31.440 real 0m5.289s 00:06:31.440 user 0m10.468s 00:06:31.440 sys 0m0.467s 00:06:31.440 18:21:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.441 18:21:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.441 ************************************ 00:06:31.441 END TEST event_scheduler 00:06:31.441 ************************************ 00:06:31.441 18:21:16 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.441 18:21:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:31.441 18:21:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:31.441 18:21:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.441 18:21:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.441 18:21:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.441 ************************************ 00:06:31.441 START TEST app_repeat 00:06:31.441 ************************************ 00:06:31.441 18:21:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:31.441 18:21:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.441 18:21:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.441 18:21:16 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:31.441 18:21:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.441 18:21:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:31.441 18:21:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:31.441 18:21:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2719196 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2719196' 00:06:31.700 Process app_repeat pid: 2719196 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:31.700 spdk_app_start Round 0 00:06:31.700 18:21:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2719196 /var/tmp/spdk-nbd.sock 00:06:31.700 18:21:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2719196 ']' 00:06:31.700 18:21:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.700 18:21:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.700 18:21:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:31.700 18:21:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.700 18:21:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.700 [2024-07-15 18:21:17.027560] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:31.700 [2024-07-15 18:21:17.027623] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719196 ] 00:06:31.700 [2024-07-15 18:21:17.131305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.700 [2024-07-15 18:21:17.223736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.700 [2024-07-15 18:21:17.223742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.958 18:21:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.958 18:21:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.958 18:21:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.217 Malloc0 00:06:32.217 18:21:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.476 Malloc1 00:06:32.476 18:21:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@92 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.476 18:21:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.735 /dev/nbd0 00:06:32.735 18:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.735 18:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.735 18:21:18 event.app_repeat -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.735 1+0 records in 00:06:32.735 1+0 records out 00:06:32.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221287 s, 18.5 MB/s 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.735 18:21:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.735 18:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.735 18:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.735 18:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.994 /dev/nbd1 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.994 18:21:18 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.994 1+0 records in 00:06:32.994 1+0 records out 00:06:32.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197093 s, 20.8 MB/s 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.994 18:21:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.994 18:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.253 18:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.253 { 00:06:33.253 "nbd_device": "/dev/nbd0", 00:06:33.253 "bdev_name": "Malloc0" 00:06:33.253 }, 00:06:33.253 { 00:06:33.253 
"nbd_device": "/dev/nbd1", 00:06:33.253 "bdev_name": "Malloc1" 00:06:33.253 } 00:06:33.253 ]' 00:06:33.253 18:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.253 { 00:06:33.253 "nbd_device": "/dev/nbd0", 00:06:33.253 "bdev_name": "Malloc0" 00:06:33.253 }, 00:06:33.253 { 00:06:33.253 "nbd_device": "/dev/nbd1", 00:06:33.253 "bdev_name": "Malloc1" 00:06:33.253 } 00:06:33.253 ]' 00:06:33.253 18:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.253 18:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.253 /dev/nbd1' 00:06:33.253 18:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.253 /dev/nbd1' 00:06:33.253 18:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.512 256+0 records in 00:06:33.512 256+0 
records out 00:06:33.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977615 s, 107 MB/s 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.512 256+0 records in 00:06:33.512 256+0 records out 00:06:33.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208737 s, 50.2 MB/s 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.512 256+0 records in 00:06:33.512 256+0 records out 00:06:33.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022085 s, 47.5 MB/s 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for 
i in "${nbd_list[@]}" 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.512 18:21:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.771 18:21:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.030 18:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
count=0 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.290 18:21:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.290 18:21:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.549 18:21:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.807 [2024-07-15 18:21:20.104255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.807 [2024-07-15 18:21:20.194143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.807 [2024-07-15 18:21:20.194148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.807 [2024-07-15 18:21:20.239819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.807 [2024-07-15 18:21:20.239866] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.094 18:21:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.094 18:21:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.094 spdk_app_start Round 1 00:06:38.094 18:21:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2719196 /var/tmp/spdk-nbd.sock 00:06:38.094 18:21:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2719196 ']' 00:06:38.094 18:21:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.094 18:21:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.094 18:21:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:38.094 18:21:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.094 18:21:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.094 18:21:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.094 18:21:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.094 18:21:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.094 Malloc0 00:06:38.094 18:21:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.353 Malloc1 00:06:38.353 18:21:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.353 18:21:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.612 /dev/nbd0 00:06:38.612 18:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.612 18:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.612 1+0 records in 00:06:38.612 1+0 records out 00:06:38.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222626 s, 18.4 MB/s 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.612 18:21:24 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.612 18:21:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.612 18:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.612 18:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.612 18:21:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.871 /dev/nbd1 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.871 1+0 records in 00:06:38.871 1+0 records out 00:06:38.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224906 s, 18.2 MB/s 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.871 18:21:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.871 18:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.130 { 00:06:39.130 "nbd_device": "/dev/nbd0", 00:06:39.130 "bdev_name": "Malloc0" 00:06:39.130 }, 00:06:39.130 { 00:06:39.130 "nbd_device": "/dev/nbd1", 00:06:39.130 "bdev_name": "Malloc1" 00:06:39.130 } 00:06:39.130 ]' 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.130 { 00:06:39.130 "nbd_device": "/dev/nbd0", 00:06:39.130 "bdev_name": "Malloc0" 00:06:39.130 }, 00:06:39.130 { 00:06:39.130 "nbd_device": "/dev/nbd1", 00:06:39.130 "bdev_name": "Malloc1" 00:06:39.130 } 00:06:39.130 ]' 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.130 /dev/nbd1' 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.130 /dev/nbd1' 00:06:39.130 
18:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.130 18:21:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.131 256+0 records in 00:06:39.131 256+0 records out 00:06:39.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00982861 s, 107 MB/s 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.131 256+0 records in 00:06:39.131 256+0 records out 00:06:39.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020612 s, 50.9 MB/s 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.131 256+0 records in 00:06:39.131 256+0 records out 00:06:39.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221918 s, 47.3 MB/s 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.131 18:21:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.390 18:21:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.648 18:21:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.906 18:21:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.906 18:21:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.176 18:21:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.470 [2024-07-15 18:21:25.853872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.470 [2024-07-15 18:21:25.941539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.470 [2024-07-15 18:21:25.941545] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:06:40.470 [2024-07-15 18:21:25.987253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.471 [2024-07-15 18:21:25.987299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.754 18:21:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.754 18:21:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:43.754 spdk_app_start Round 2 00:06:43.754 18:21:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2719196 /var/tmp/spdk-nbd.sock 00:06:43.754 18:21:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2719196 ']' 00:06:43.754 18:21:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.755 18:21:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.755 18:21:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:43.755 18:21:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.755 18:21:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.755 18:21:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.755 18:21:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.755 18:21:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.755 Malloc0 00:06:43.755 18:21:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.014 Malloc1 00:06:44.014 18:21:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.014 18:21:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.273 /dev/nbd0 00:06:44.273 18:21:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.273 18:21:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.273 1+0 records in 00:06:44.273 1+0 records out 00:06:44.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235438 s, 17.4 MB/s 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.273 18:21:29 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.273 18:21:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.273 18:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.273 18:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.273 18:21:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.533 /dev/nbd1 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.533 1+0 records in 00:06:44.533 1+0 records out 00:06:44.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204843 s, 20.0 MB/s 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.533 18:21:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.533 18:21:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.792 { 00:06:44.792 "nbd_device": "/dev/nbd0", 00:06:44.792 "bdev_name": "Malloc0" 00:06:44.792 }, 00:06:44.792 { 00:06:44.792 "nbd_device": "/dev/nbd1", 00:06:44.792 "bdev_name": "Malloc1" 00:06:44.792 } 00:06:44.792 ]' 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.792 { 00:06:44.792 "nbd_device": "/dev/nbd0", 00:06:44.792 "bdev_name": "Malloc0" 00:06:44.792 }, 00:06:44.792 { 00:06:44.792 "nbd_device": "/dev/nbd1", 00:06:44.792 "bdev_name": "Malloc1" 00:06:44.792 } 00:06:44.792 ]' 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.792 /dev/nbd1' 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.792 /dev/nbd1' 00:06:44.792 
18:21:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.792 256+0 records in 00:06:44.792 256+0 records out 00:06:44.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103094 s, 102 MB/s 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.792 256+0 records in 00:06:44.792 256+0 records out 00:06:44.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214823 s, 48.8 MB/s 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.792 18:21:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.051 256+0 records in 00:06:45.051 256+0 records out 00:06:45.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220272 s, 47.6 MB/s 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.051 18:21:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.310 18:21:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:45.567 18:21:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.568 18:21:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.568 18:21:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.568 18:21:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.825 18:21:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.825 18:21:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.083 18:21:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.341 [2024-07-15 18:21:31.675176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.342 [2024-07-15 18:21:31.763410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.342 [2024-07-15 18:21:31.763415] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:06:46.342 [2024-07-15 18:21:31.809628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.342 [2024-07-15 18:21:31.809687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.623 18:21:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2719196 /var/tmp/spdk-nbd.sock 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2719196 ']' 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:49.623 18:21:34 event.app_repeat -- event/event.sh@39 -- # killprocess 2719196 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2719196 ']' 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2719196 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2719196 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2719196' 00:06:49.623 killing process with pid 2719196 00:06:49.623 18:21:34 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2719196 00:06:49.624 18:21:34 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2719196 00:06:49.624 spdk_app_start is called in Round 0. 00:06:49.624 Shutdown signal received, stop current app iteration 00:06:49.624 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:06:49.624 spdk_app_start is called in Round 1. 00:06:49.624 Shutdown signal received, stop current app iteration 00:06:49.624 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:06:49.624 spdk_app_start is called in Round 2. 
00:06:49.624 Shutdown signal received, stop current app iteration 00:06:49.624 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:06:49.624 spdk_app_start is called in Round 3. 00:06:49.624 Shutdown signal received, stop current app iteration 00:06:49.624 18:21:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:49.624 18:21:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:49.624 00:06:49.624 real 0m17.981s 00:06:49.624 user 0m39.877s 00:06:49.624 sys 0m2.910s 00:06:49.624 18:21:34 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.624 18:21:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.624 ************************************ 00:06:49.624 END TEST app_repeat 00:06:49.624 ************************************ 00:06:49.624 18:21:34 event -- common/autotest_common.sh@1142 -- # return 0 00:06:49.624 18:21:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:49.624 00:06:49.624 real 0m27.612s 00:06:49.624 user 0m57.080s 00:06:49.624 sys 0m3.991s 00:06:49.624 18:21:34 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.624 18:21:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.624 ************************************ 00:06:49.624 END TEST event 00:06:49.624 ************************************ 00:06:49.624 18:21:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.624 18:21:35 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:49.624 18:21:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.624 18:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.624 18:21:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.624 ************************************ 00:06:49.624 START TEST thread 00:06:49.624 ************************************ 00:06:49.624 18:21:35 thread -- common/autotest_common.sh@1123 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:49.624 * Looking for test storage... 00:06:49.624 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:49.624 18:21:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.624 18:21:35 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:49.624 18:21:35 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.624 18:21:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.883 ************************************ 00:06:49.883 START TEST thread_poller_perf 00:06:49.883 ************************************ 00:06:49.883 18:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.883 [2024-07-15 18:21:35.218509] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:49.883 [2024-07-15 18:21:35.218580] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722320 ] 00:06:49.883 [2024-07-15 18:21:35.316256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.883 [2024-07-15 18:21:35.412436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.883 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:51.258 ====================================== 00:06:51.258 busy:2113558472 (cyc) 00:06:51.258 total_run_count: 244000 00:06:51.258 tsc_hz: 2100000000 (cyc) 00:06:51.258 ====================================== 00:06:51.258 poller_cost: 8662 (cyc), 4124 (nsec) 00:06:51.258 00:06:51.258 real 0m1.315s 00:06:51.258 user 0m1.203s 00:06:51.258 sys 0m0.106s 00:06:51.258 18:21:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.258 18:21:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.258 ************************************ 00:06:51.258 END TEST thread_poller_perf 00:06:51.258 ************************************ 00:06:51.258 18:21:36 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:51.258 18:21:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.258 18:21:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:51.258 18:21:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.258 18:21:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.258 ************************************ 00:06:51.258 START TEST thread_poller_perf 00:06:51.258 ************************************ 00:06:51.258 18:21:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.258 [2024-07-15 18:21:36.595056] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:51.258 [2024-07-15 18:21:36.595110] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722567 ] 00:06:51.258 [2024-07-15 18:21:36.690679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.258 [2024-07-15 18:21:36.781932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.258 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:52.634 ====================================== 00:06:52.634 busy:2102093774 (cyc) 00:06:52.634 total_run_count: 3231000 00:06:52.634 tsc_hz: 2100000000 (cyc) 00:06:52.635 ====================================== 00:06:52.635 poller_cost: 650 (cyc), 309 (nsec) 00:06:52.635 00:06:52.635 real 0m1.294s 00:06:52.635 user 0m1.187s 00:06:52.635 sys 0m0.102s 00:06:52.635 18:21:37 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.635 18:21:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.635 ************************************ 00:06:52.635 END TEST thread_poller_perf 00:06:52.635 ************************************ 00:06:52.635 18:21:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:52.635 18:21:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:52.635 00:06:52.635 real 0m2.831s 00:06:52.635 user 0m2.483s 00:06:52.635 sys 0m0.356s 00:06:52.635 18:21:37 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.635 18:21:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.635 ************************************ 00:06:52.635 END TEST thread 00:06:52.635 ************************************ 00:06:52.635 18:21:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.635 18:21:37 -- spdk/autotest.sh@183 -- # run_test accel 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:52.635 18:21:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.635 18:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.635 18:21:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.635 ************************************ 00:06:52.635 START TEST accel 00:06:52.635 ************************************ 00:06:52.635 18:21:37 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:52.635 * Looking for test storage... 00:06:52.635 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:06:52.635 18:21:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:52.635 18:21:38 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:52.635 18:21:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.635 18:21:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2722921 00:06:52.635 18:21:38 accel -- accel/accel.sh@63 -- # waitforlisten 2722921 00:06:52.635 18:21:38 accel -- common/autotest_common.sh@829 -- # '[' -z 2722921 ']' 00:06:52.635 18:21:38 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.635 18:21:38 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:52.635 18:21:38 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.635 18:21:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:52.635 18:21:38 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.635 18:21:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.635 18:21:38 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.635 18:21:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.635 18:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.635 18:21:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.635 18:21:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.635 18:21:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.635 18:21:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:52.635 18:21:38 accel -- accel/accel.sh@41 -- # jq -r . 00:06:52.635 [2024-07-15 18:21:38.121820] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:52.635 [2024-07-15 18:21:38.121886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722921 ] 00:06:52.892 [2024-07-15 18:21:38.220705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.892 [2024-07-15 18:21:38.316833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.826 18:21:39 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.826 18:21:39 accel -- common/autotest_common.sh@862 -- # return 0 00:06:53.826 18:21:39 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:53.826 18:21:39 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:53.826 18:21:39 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:53.826 18:21:39 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:53.826 18:21:39 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:53.826 18:21:39 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:53.826 18:21:39 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:53.826 18:21:39 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.826 18:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.826 18:21:39 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.826 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.826 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.826 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.826 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.826 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.826 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.826 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.826 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.826 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.826 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.826 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.826 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}"
00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS==
00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module
00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS==
00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module
00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:06:53.827 18:21:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # IFS==
00:06:53.827 18:21:39 accel -- accel/accel.sh@72 -- # read -r opc module
00:06:53.827 18:21:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:06:53.827 18:21:39 accel -- accel/accel.sh@75 -- # killprocess 2722921
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@948 -- # '[' -z 2722921 ']'
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@952 -- # kill -0 2722921
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@953 -- # uname
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722921
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722921'
00:06:53.827 killing process with pid 2722921
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@967 -- # kill 2722921
00:06:53.827 18:21:39 accel -- common/autotest_common.sh@972 -- # wait 2722921
00:06:54.086 18:21:39 accel -- accel/accel.sh@76 -- # trap - ERR
00:06:54.086 18:21:39 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@10 -- # set +x
00:06:54.086 18:21:39 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:06:54.086 18:21:39 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:06:54.086 18:21:39 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:54.086 18:21:39 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:54.086 18:21:39 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:54.086 18:21:39 accel -- common/autotest_common.sh@10 -- # set +x
00:06:54.344 ************************************
00:06:54.344 START TEST accel_missing_filename ************************************
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:54.344 18:21:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:06:54.344 18:21:39 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:06:54.344 [2024-07-15 18:21:39.687110] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:54.344 [2024-07-15 18:21:39.687214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723226 ]
00:06:54.344 [2024-07-15 18:21:39.824069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.603 [2024-07-15 18:21:39.922779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.603 [2024-07-15 18:21:39.982570] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:54.603 [2024-07-15 18:21:40.047728] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:06:54.603 A filename is required.
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:54.603
00:06:54.603 real 0m0.488s
00:06:54.603 user 0m0.329s
00:06:54.603 sys 0m0.195s
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:54.603 18:21:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:06:54.603 ************************************
00:06:54.603 END TEST accel_missing_filename ************************************
00:06:54.862 18:21:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:54.862 18:21:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:54.862 18:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:06:54.862 18:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:54.862 18:21:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:54.862 ************************************
00:06:54.862 START TEST accel_compress_verify ************************************
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:54.862 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:06:54.862 18:21:40 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:06:54.862 [2024-07-15 18:21:40.229748] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:54.862 [2024-07-15 18:21:40.229806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723256 ]
00:06:54.862 [2024-07-15 18:21:40.331106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.120 [2024-07-15 18:21:40.424211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.120 [2024-07-15 18:21:40.479160] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:55.121 [2024-07-15 18:21:40.543183] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:06:55.121
00:06:55.121 Compression does not support the verify option, aborting.
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:55.121
00:06:55.121 real 0m0.433s
00:06:55.121 user 0m0.316s
00:06:55.121 sys 0m0.154s
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.121 18:21:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:06:55.121 ************************************
00:06:55.121 END TEST accel_compress_verify ************************************
00:06:55.121 18:21:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:55.121 18:21:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:06:55.121 18:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:55.121 18:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:55.121 18:21:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:55.380 ************************************
00:06:55.380 START TEST accel_wrong_workload ************************************
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:06:55.380 18:21:40 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:06:55.380 Unsupported workload type: foobar [2024-07-15 18:21:40.725103] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:06:55.380 accel_perf options:
00:06:55.380 [-h help message]
00:06:55.380 [-q queue depth per core]
00:06:55.380 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:06:55.380 [-T number of threads per core
00:06:55.380 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:55.380 [-t time in seconds]
00:06:55.380 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:55.380 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:06:55.380 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:06:55.380 [-l for compress/decompress workloads, name of uncompressed input file
00:06:55.380 [-S for crc32c workload, use this seed value (default 0)
00:06:55.380 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:06:55.380 [-f for fill workload, use this BYTE value (default 255)
00:06:55.380 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:55.380 [-y verify result if this switch is on]
00:06:55.380 [-a tasks to allocate per core (default: same value as -q)]
00:06:55.380 Can be used to spread operations across a wider range of memory.
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:55.380
00:06:55.380 real 0m0.040s
00:06:55.380 user 0m0.027s
00:06:55.380 sys 0m0.012s
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.380 18:21:40 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:06:55.381 ************************************
00:06:55.381 END TEST accel_wrong_workload ************************************
00:06:55.381 Error: writing output failed: Broken pipe
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:55.381 18:21:40 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:55.381 ************************************
00:06:55.381 START TEST accel_negative_buffers ************************************
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:06:55.381 18:21:40 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:06:55.381 -x option must be non-negative. [2024-07-15 18:21:40.828780] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:06:55.381 accel_perf options:
00:06:55.381 [-h help message]
00:06:55.381 [-q queue depth per core]
00:06:55.381 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:06:55.381 [-T number of threads per core
00:06:55.381 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:55.381 [-t time in seconds]
00:06:55.381 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:55.381 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:06:55.381 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:06:55.381 [-l for compress/decompress workloads, name of uncompressed input file
00:06:55.381 [-S for crc32c workload, use this seed value (default 0)
00:06:55.381 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:06:55.381 [-f for fill workload, use this BYTE value (default 255)
00:06:55.381 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:55.381 [-y verify result if this switch is on]
00:06:55.381 [-a tasks to allocate per core (default: same value as -q)]
00:06:55.381 Can be used to spread operations across a wider range of memory.
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:55.381
00:06:55.381 real 0m0.037s
00:06:55.381 user 0m0.025s
00:06:55.381 sys 0m0.012s
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.381 18:21:40 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:06:55.381 ************************************
00:06:55.381 END TEST accel_negative_buffers ************************************
00:06:55.381 Error: writing output failed: Broken pipe
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:55.381 18:21:40 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:55.381 18:21:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:55.381 ************************************
00:06:55.381 START TEST accel_crc32c ************************************
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:06:55.381 18:21:40 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:06:55.381 [2024-07-15 18:21:40.930791] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:55.381 [2024-07-15 18:21:40.930843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723524 ]
00:06:55.640 [2024-07-15 18:21:41.030413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.640 [2024-07-15 18:21:41.121707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.640 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.641 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.899 18:21:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:56.833 18:21:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:56.833
00:06:56.833 real 0m1.436s
00:06:56.833 user 0m1.291s
00:06:56.833 sys 0m0.151s
00:06:56.833 18:21:42 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:56.833 18:21:42 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:56.833 ************************************
00:06:56.833 END TEST accel_crc32c ************************************
00:06:56.833 18:21:42 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:56.833 18:21:42 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:06:56.833 18:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:56.833 18:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:56.833 18:21:42 accel -- common/autotest_common.sh@10 -- # set +x
00:06:57.091 ************************************
00:06:57.091 START TEST accel_crc32c_C2 ************************************
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:06:57.091 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:06:57.091 [2024-07-15 18:21:42.439709] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:06:57.091 [2024-07-15 18:21:42.439765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723759 ] 00:06:57.091 [2024-07-15 18:21:42.539132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.091 [2024-07-15 18:21:42.632580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.348 
18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.348 18:21:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.717 00:06:58.717 real 0m1.440s 00:06:58.717 user 0m1.300s 00:06:58.717 sys 0m0.145s 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.717 18:21:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:58.717 ************************************ 00:06:58.717 END TEST accel_crc32c_C2 00:06:58.717 ************************************ 00:06:58.717 18:21:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.717 18:21:43 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:58.718 18:21:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.718 18:21:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.718 18:21:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.718 ************************************ 00:06:58.718 START TEST accel_copy 00:06:58.718 ************************************ 00:06:58.718 18:21:43 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:58.718 18:21:43 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:58.718 [2024-07-15 18:21:43.946108] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:58.718 [2024-07-15 18:21:43.946166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723994 ] 00:06:58.718 [2024-07-15 18:21:44.044682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.718 [2024-07-15 18:21:44.135817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:58.718 18:21:44 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.718 18:21:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:00.095 18:21:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.095 00:07:00.095 real 0m1.440s 00:07:00.095 user 0m1.285s 00:07:00.095 sys 0m0.155s 00:07:00.095 18:21:45 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.095 18:21:45 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.095 ************************************ 00:07:00.095 END TEST accel_copy 00:07:00.095 ************************************ 00:07:00.095 18:21:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.095 18:21:45 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.095 18:21:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:00.095 18:21:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.095 18:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.095 ************************************ 00:07:00.095 START TEST accel_fill 00:07:00.095 ************************************ 00:07:00.095 18:21:45 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:00.095 18:21:45 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:00.095 [2024-07-15 18:21:45.453328] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:00.095 [2024-07-15 18:21:45.453387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724232 ] 00:07:00.095 [2024-07-15 18:21:45.551883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.095 [2024-07-15 18:21:45.643386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.354 18:21:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.328 18:21:46 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:01.328 18:21:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.328 00:07:01.328 real 0m1.435s 00:07:01.328 user 0m1.292s 00:07:01.328 sys 0m0.148s 00:07:01.328 18:21:46 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.328 18:21:46 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:01.328 ************************************ 00:07:01.328 END TEST accel_fill 00:07:01.328 ************************************ 00:07:01.594 18:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.594 18:21:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:01.594 18:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.594 18:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.594 18:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.594 ************************************ 00:07:01.594 START TEST accel_copy_crc32c 00:07:01.594 ************************************ 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:01.594 18:21:46 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:01.594 [2024-07-15 18:21:46.957500] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:01.594 [2024-07-15 18:21:46.957554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724471 ] 00:07:01.594 [2024-07-15 18:21:47.056320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.853 [2024-07-15 18:21:47.148571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.853 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val=software 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.854 18:21:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.230 00:07:03.230 real 0m1.438s 00:07:03.230 user 0m1.292s 00:07:03.230 sys 0m0.153s 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.230 18:21:48 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:03.230 ************************************ 00:07:03.230 END TEST accel_copy_crc32c 00:07:03.230 ************************************ 00:07:03.230 18:21:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.230 18:21:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:03.230 18:21:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:03.230 18:21:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.230 18:21:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.230 ************************************ 00:07:03.230 START TEST accel_copy_crc32c_C2 00:07:03.230 
************************************ 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:03.230 [2024-07-15 18:21:48.464063] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:03.230 [2024-07-15 18:21:48.464118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724709 ] 00:07:03.230 [2024-07-15 18:21:48.563452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.230 [2024-07-15 18:21:48.655430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.230 18:21:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 
18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.605 00:07:04.605 real 0m1.450s 00:07:04.605 user 0m1.289s 00:07:04.605 sys 0m0.157s 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.605 18:21:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:04.605 ************************************ 00:07:04.605 END TEST accel_copy_crc32c_C2 00:07:04.605 ************************************ 00:07:04.605 18:21:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.605 18:21:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:04.605 18:21:49 accel -- common/autotest_common.sh@1099 -- # 
'[' 7 -le 1 ']' 00:07:04.605 18:21:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.605 18:21:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.605 ************************************ 00:07:04.605 START TEST accel_dualcast 00:07:04.605 ************************************ 00:07:04.605 18:21:49 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:04.605 18:21:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:04.605 [2024-07-15 18:21:49.982535] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:04.605 [2024-07-15 18:21:49.982599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724946 ] 00:07:04.605 [2024-07-15 18:21:50.083389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.864 [2024-07-15 18:21:50.175822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.864 18:21:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.865 18:21:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:06.243 18:21:51 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.243 00:07:06.243 real 0m1.447s 00:07:06.243 user 0m1.290s 00:07:06.243 sys 0m0.153s 00:07:06.243 18:21:51 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.243 18:21:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:06.243 ************************************ 00:07:06.243 END TEST accel_dualcast 00:07:06.243 ************************************ 00:07:06.243 18:21:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.243 18:21:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:06.243 18:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.243 18:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.243 18:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.243 ************************************ 00:07:06.243 START TEST accel_compare 00:07:06.243 ************************************ 00:07:06.243 18:21:51 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.243 
18:21:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:06.243 [2024-07-15 18:21:51.493376] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:06.243 [2024-07-15 18:21:51.493430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725244 ] 00:07:06.243 [2024-07-15 18:21:51.589660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.243 [2024-07-15 18:21:51.680742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 
18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.243 18:21:51 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.243 18:21:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:07.621 18:21:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.621 00:07:07.621 real 0m1.437s 00:07:07.621 user 0m1.289s 00:07:07.621 sys 0m0.146s 00:07:07.621 18:21:52 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.621 18:21:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:07.621 ************************************ 00:07:07.621 END TEST accel_compare 00:07:07.621 ************************************ 00:07:07.621 18:21:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.621 18:21:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:07.621 18:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.621 18:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.621 18:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.621 ************************************ 00:07:07.621 START TEST accel_xor 00:07:07.621 ************************************ 00:07:07.621 18:21:52 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:07.621 18:21:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:07.621 [2024-07-15 18:21:53.010759] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:07.621 [2024-07-15 18:21:53.010859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725566 ] 00:07:07.621 [2024-07-15 18:21:53.146935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.885 [2024-07-15 18:21:53.239110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.885 18:21:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.261 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.262 00:07:09.262 real 0m1.481s 00:07:09.262 user 0m1.304s 00:07:09.262 sys 0m0.182s 00:07:09.262 18:21:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.262 18:21:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:09.262 ************************************ 00:07:09.262 END TEST accel_xor 00:07:09.262 ************************************ 00:07:09.262 18:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.262 18:21:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:09.262 18:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.262 18:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.262 18:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.262 ************************************ 00:07:09.262 START TEST accel_xor 00:07:09.262 ************************************ 00:07:09.262 18:21:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:09.262 [2024-07-15 18:21:54.548645] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:09.262 [2024-07-15 18:21:54.548702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725862 ] 00:07:09.262 [2024-07-15 18:21:54.645908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.262 [2024-07-15 18:21:54.737174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var 
val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 
18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.262 18:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:10.640 18:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.640 00:07:10.640 real 0m1.429s 00:07:10.640 user 0m1.291s 00:07:10.640 sys 0m0.143s 00:07:10.640 18:21:55 accel.accel_xor -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.640 18:21:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:10.640 ************************************ 00:07:10.640 END TEST accel_xor 00:07:10.640 ************************************ 00:07:10.640 18:21:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.640 18:21:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:10.640 18:21:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:10.640 18:21:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.640 18:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.640 ************************************ 00:07:10.640 START TEST accel_dif_verify 00:07:10.640 ************************************ 00:07:10.640 18:21:56 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:10.640 18:21:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:10.640 [2024-07-15 18:21:56.043483] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:10.640 [2024-07-15 18:21:56.043541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726096 ] 00:07:10.640 [2024-07-15 18:21:56.140615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.899 [2024-07-15 18:21:56.232407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:07:10.900 18:21:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:07:12.277 18:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:07:12.277 18:21:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:12.277 18:21:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:12.277 18:21:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:12.277
00:07:12.277 real 0m1.434s
00:07:12.277 user 0m1.291s
00:07:12.277 sys 0m0.149s
00:07:12.277 18:21:57 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:12.277 18:21:57 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:12.277 ************************************
00:07:12.277 END TEST accel_dif_verify
00:07:12.277
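The xtrace lines above all come from accel.sh's output-parsing loop: each line of the accel_perf report is split on `:` into `var` and `val` with `IFS=: read -r var val`, and a `case "$var"` dispatch picks out fields such as the opcode and the engine module. A minimal sketch of that pattern, using hypothetical sample input (the field names below are illustrative, not the real accel_perf report format):

```shell
# Sketch of the key/value parsing loop seen in the xtrace output:
# split each "key: value" line on ':' and dispatch on the key.
parse_output() {
  while IFS=: read -r var val; do
    case "$var" in
      *"Workload Type"*) echo "opc=${val# }" ;;     # trim one leading space
      *"Module"*)        echo "module=${val# }" ;;
    esac
  done
}

# Hypothetical sample lines standing in for the real accel_perf report:
printf 'Workload Type: dif_verify\nModule: software\n' | parse_output
```

The same pattern explains why the log shows an `IFS=:` / `read -r var val` / `case "$var" in` triad for every field: with `set -x` enabled, each loop iteration traces all three statements.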
************************************
00:07:12.277 18:21:57 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:12.277 18:21:57 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:12.277 18:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:12.277 18:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:12.277 18:21:57 accel -- common/autotest_common.sh@10 -- # set +x
00:07:12.277 ************************************
00:07:12.277 START TEST accel_dif_generate
00:07:12.277 ************************************
00:07:12.277 18:21:57 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:07:12.277 [2024-07-15 18:21:57.546360] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:07:12.277 [2024-07-15 18:21:57.546416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726336 ]
00:07:12.277 [2024-07-15 18:21:57.645641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.277 [2024-07-15 18:21:57.735525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:12.277 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:12.278 18:21:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:13.655 18:21:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:13.656 18:21:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:13.656 18:21:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:13.656 18:21:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:13.656
00:07:13.656 real 0m1.437s
00:07:13.656 user 0m1.296s
00:07:13.656 sys 0m0.149s
00:07:13.656 18:21:58 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:13.656 18:21:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:13.656 ************************************
00:07:13.656 END TEST
accel_dif_generate
00:07:13.656 ************************************
00:07:13.656 18:21:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:13.656 18:21:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:13.656 18:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:13.656 18:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:13.656 18:21:58 accel -- common/autotest_common.sh@10 -- # set +x
00:07:13.656 ************************************
00:07:13.656 START TEST accel_dif_generate_copy
00:07:13.656 ************************************
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:13.656 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:13.656 [2024-07-15 18:21:59.054884] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:07:13.656 [2024-07-15 18:21:59.054998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726571 ]
00:07:13.656 [2024-07-15 18:21:59.192130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.915 [2024-07-15 18:21:59.291309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:07:13.916 18:21:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:15.295 18:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:15.295 18:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:15.295 18:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:15.295 18:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:15.295
00:07:15.295 real 0m1.494s
00:07:15.295 user 0m1.308s
00:07:15.295 sys 0m0.191s
00:07:15.295 18:22:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:15.295 18:22:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:15.295 ************************************
00:07:15.295 END TEST
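Each test above is launched through the `run_test NAME command...` helper, which prints the `START TEST` / `END TEST` banners and times the command, producing the `real` / `user` / `sys` lines in the log. A simplified stand-in for that pattern (the real helper lives in autotest_common.sh and does more, such as xtrace management):

```shell
# Hypothetical sketch of the run_test banner/timing pattern seen in
# this log: banner, time the test command, banner, propagate status.
run_test_sketch() {
  name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                 # timing goes to stderr as real/user/sys
  rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo_true true
```

This is why every section of the log is bracketed by asterisk banners with matching START/END names.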
accel_dif_generate_copy
00:07:15.295 ************************************
00:07:15.295 18:22:00 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:15.295 18:22:00 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:15.295 18:22:00 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:15.295 18:22:00 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:07:15.295 18:22:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:15.295 18:22:00 accel -- common/autotest_common.sh@10 -- # set +x
00:07:15.295 ************************************
00:07:15.295 START TEST accel_comp
00:07:15.295 ************************************
00:07:15.295 18:22:00 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:07:15.295 18:22:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:15.295 [2024-07-15 18:22:00.607850] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:07:15.295 [2024-07-15 18:22:00.607907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726809 ]
00:07:15.295 [2024-07-15 18:22:00.706556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.295 [2024-07-15 18:22:00.797773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:15.555 18:22:00
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.555 18:22:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.556 18:22:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.556 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.556 18:22:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.491 18:22:02 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:16.491 18:22:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.491 00:07:16.491 real 0m1.446s 00:07:16.491 user 0m1.294s 00:07:16.491 sys 0m0.152s 00:07:16.491 18:22:02 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.491 18:22:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:16.491 ************************************ 00:07:16.491 END TEST accel_comp 00:07:16.491 ************************************ 00:07:16.750 18:22:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.750 18:22:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.750 18:22:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.750 18:22:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.750 18:22:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.750 ************************************ 00:07:16.750 START TEST accel_decomp 00:07:16.750 ************************************ 00:07:16.750 18:22:02 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.750 18:22:02 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.750 18:22:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:16.750 [2024-07-15 18:22:02.122780] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:16.750 [2024-07-15 18:22:02.122834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727046 ] 00:07:16.750 [2024-07-15 18:22:02.224882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.009 [2024-07-15 18:22:02.317806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 
18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:17.009 18:22:02 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.009 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 18:22:02 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:17.010 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 18:22:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.010 18:22:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 18:22:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.385 18:22:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.385 00:07:18.385 real 0m1.453s 00:07:18.385 user 0m1.299s 00:07:18.385 sys 0m0.153s 00:07:18.385 18:22:03 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.385 18:22:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:18.385 ************************************ 00:07:18.385 END TEST accel_decomp 00:07:18.385 ************************************ 00:07:18.385 18:22:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.385 18:22:03 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.385 18:22:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:18.385 18:22:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.385 18:22:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.385 ************************************ 00:07:18.385 START TEST accel_decomp_full 00:07:18.386 ************************************ 00:07:18.386 18:22:03 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:18.386 
18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:18.386 [2024-07-15 18:22:03.647072] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:18.386 [2024-07-15 18:22:03.647129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727304 ] 00:07:18.386 [2024-07-15 18:22:03.745999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.386 [2024-07-15 18:22:03.837289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.386 18:22:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.762 18:22:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.762 00:07:19.762 real 0m1.456s 00:07:19.762 user 0m1.298s 00:07:19.762 sys 0m0.161s 00:07:19.762 18:22:05 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.762 18:22:05 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:19.762 ************************************ 00:07:19.762 END TEST accel_decomp_full 00:07:19.762 ************************************ 00:07:19.762 18:22:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.762 18:22:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.762 18:22:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:19.762 18:22:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.762 18:22:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.762 
************************************ 00:07:19.762 START TEST accel_decomp_mcore 00:07:19.762 ************************************ 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:19.762 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:19.762 [2024-07-15 18:22:05.167423] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:19.762 [2024-07-15 18:22:05.167477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727639 ] 00:07:19.762 [2024-07-15 18:22:05.264324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.030 [2024-07-15 18:22:05.360259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.030 [2024-07-15 18:22:05.360363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.030 [2024-07-15 18:22:05.360523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.030 [2024-07-15 18:22:05.360523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.030 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.031 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.031 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.031 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.031 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.031 18:22:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.462 00:07:21.462 real 0m1.453s 00:07:21.462 user 0m4.704s 00:07:21.462 sys 0m0.161s 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.462 18:22:06 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:21.462 ************************************ 00:07:21.462 END TEST accel_decomp_mcore 00:07:21.462 ************************************ 00:07:21.462 18:22:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.462 18:22:06 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.462 18:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:21.462 18:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.462 18:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.462 ************************************ 00:07:21.462 START TEST accel_decomp_full_mcore 00:07:21.462 ************************************ 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.462 18:22:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.462 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.462 [2024-07-15 18:22:06.688933] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:21.463 [2024-07-15 18:22:06.688991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727967 ] 00:07:21.463 [2024-07-15 18:22:06.787804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.463 [2024-07-15 18:22:06.882437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.463 [2024-07-15 18:22:06.882543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.463 [2024-07-15 18:22:06.882670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.463 [2024-07-15 18:22:06.882671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.463 18:22:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.463 18:22:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.862 00:07:22.862 real 0m1.472s 00:07:22.862 user 0m4.781s 00:07:22.862 sys 0m0.158s 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.862 18:22:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.862 ************************************ 00:07:22.862 END TEST accel_decomp_full_mcore 00:07:22.862 ************************************ 00:07:22.862 18:22:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.862 18:22:08 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.862 18:22:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:22.862 18:22:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.862 18:22:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.862 
************************************ 00:07:22.862 START TEST accel_decomp_mthread 00:07:22.862 ************************************ 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:22.862 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.863 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.863 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.863 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.863 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.863 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:22.863 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:22.863 [2024-07-15 18:22:08.233401] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:22.863 [2024-07-15 18:22:08.233454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728205 ] 00:07:22.863 [2024-07-15 18:22:08.332820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.122 [2024-07-15 18:22:08.424753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 
18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.122 18:22:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.499 00:07:24.499 real 0m1.447s 00:07:24.499 user 0m1.299s 00:07:24.499 sys 0m0.154s 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.499 18:22:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:07:24.499 ************************************ 00:07:24.499 END TEST accel_decomp_mthread 00:07:24.499 ************************************ 00:07:24.499 18:22:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.499 18:22:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.499 18:22:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:24.499 18:22:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.499 18:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.499 ************************************ 00:07:24.499 START TEST accel_decomp_full_mthread 00:07:24.499 ************************************ 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:24.499 [2024-07-15 18:22:09.744072] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:24.499 [2024-07-15 18:22:09.744128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728448 ] 00:07:24.499 [2024-07-15 18:22:09.832197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.499 [2024-07-15 18:22:09.923915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 18:22:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 18:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.500 18:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 18:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:24.500 18:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.878 00:07:25.878 real 0m1.467s 00:07:25.878 user 0m1.322s 00:07:25.878 sys 0m0.150s 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.878 18:22:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.878 ************************************ 00:07:25.878 END TEST accel_decomp_full_mthread 00:07:25.878 ************************************ 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.878 18:22:11 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:25.878 18:22:11 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:25.878 18:22:11 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:25.878 18:22:11 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.878 18:22:11 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2728677 00:07:25.878 18:22:11 accel -- accel/accel.sh@63 -- # waitforlisten 2728677 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@829 -- 
# '[' -z 2728677 ']' 00:07:25.878 18:22:11 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.878 18:22:11 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.878 18:22:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.878 18:22:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.878 18:22:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.878 18:22:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.878 18:22:11 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:25.878 18:22:11 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:25.878 18:22:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.878 18:22:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.878 18:22:11 accel -- accel/accel.sh@41 -- # jq -r . 00:07:25.878 [2024-07-15 18:22:11.281594] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:25.878 [2024-07-15 18:22:11.281655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728677 ] 00:07:25.878 [2024-07-15 18:22:11.386401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.137 [2024-07-15 18:22:11.480192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.705 [2024-07-15 18:22:12.065268] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:26.705 18:22:12 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.705 18:22:12 accel -- common/autotest_common.sh@862 -- # return 0 00:07:26.705 18:22:12 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:26.705 18:22:12 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:26.705 18:22:12 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:26.705 18:22:12 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:26.705 18:22:12 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:26.705 18:22:12 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:26.705 18:22:12 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:26.705 18:22:12 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 18:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 18:22:12 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:26.964 18:22:12 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.964 "method": "compressdev_scan_accel_module", 00:07:26.964 18:22:12 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.964 18:22:12 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:26.964 18:22:12 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.964 18:22:12 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.964 18:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.964 18:22:12 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.964 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.964 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.964 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.964 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.964 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.964 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.964 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.964 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.964 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.964 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.964 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.964 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.964 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- 
accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.965 18:22:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.965 18:22:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.965 18:22:12 accel -- accel/accel.sh@75 -- # killprocess 2728677 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@948 -- # '[' -z 2728677 ']' 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@952 -- # kill -0 2728677 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@953 -- # uname 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2728677 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2728677' 00:07:26.965 killing process with pid 2728677 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@967 -- # 
kill 2728677 00:07:26.965 18:22:12 accel -- common/autotest_common.sh@972 -- # wait 2728677 00:07:27.533 18:22:12 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:27.533 18:22:12 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.533 18:22:12 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:27.533 18:22:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.533 18:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.533 ************************************ 00:07:27.533 START TEST accel_cdev_comp 00:07:27.533 ************************************ 00:07:27.533 18:22:12 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.533 18:22:12 
accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:27.533 18:22:12 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:27.533 [2024-07-15 18:22:12.895733] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:27.533 [2024-07-15 18:22:12.895790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728921 ] 00:07:27.533 [2024-07-15 18:22:12.993667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.533 [2024-07-15 18:22:13.084175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.102 [2024-07-15 18:22:13.640750] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:28.102 [2024-07-15 18:22:13.643045] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a22a80 PMD being used: compress_qat 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.102 [2024-07-15 18:22:13.647035] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a27840 PMD being used: compress_qat 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 
18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.102 18:22:13 accel.accel_cdev_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.102 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 
00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:28.361 18:22:13 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- 
accel/accel.sh@20 -- # val= 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:29.299 18:22:14 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:29.299 00:07:29.299 real 0m1.944s 00:07:29.299 user 0m1.558s 00:07:29.299 sys 0m0.383s 00:07:29.299 18:22:14 
accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.299 18:22:14 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:29.299 ************************************ 00:07:29.299 END TEST accel_cdev_comp 00:07:29.299 ************************************ 00:07:29.299 18:22:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.299 18:22:14 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.299 18:22:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.299 18:22:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.299 18:22:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.558 ************************************ 00:07:29.558 START TEST accel_cdev_decomp 00:07:29.558 ************************************ 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.558 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:29.559 18:22:14 
accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:29.559 18:22:14 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:29.559 [2024-07-15 18:22:14.909334] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:29.559 [2024-07-15 18:22:14.909385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729364 ] 00:07:29.559 [2024-07-15 18:22:15.006796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.559 [2024-07-15 18:22:15.097497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.127 [2024-07-15 18:22:15.652585] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:30.127 [2024-07-15 18:22:15.654852] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x180da80 PMD being used: compress_qat 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 
accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 [2024-07-15 18:22:15.658923] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1812840 PMD being used: compress_qat 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- 
accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 18:22:15 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 18:22:15 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:31.504 00:07:31.504 real 0m1.944s 00:07:31.504 user 0m1.545s 00:07:31.504 sys 0m0.401s 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.504 18:22:16 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:31.504 ************************************ 00:07:31.504 END TEST accel_cdev_decomp 00:07:31.504 ************************************ 00:07:31.504 18:22:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.504 18:22:16 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.504 18:22:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:31.504 18:22:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.504 18:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.504 ************************************ 00:07:31.504 START TEST accel_cdev_decomp_full 00:07:31.504 ************************************ 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@19 
-- # read -r var val 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:31.504 18:22:16 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:31.504 [2024-07-15 18:22:16.917235] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:31.504 [2024-07-15 18:22:16.917290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729605 ] 00:07:31.504 [2024-07-15 18:22:17.014266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.763 [2024-07-15 18:22:17.108907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.331 [2024-07-15 18:22:17.666674] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:32.332 [2024-07-15 18:22:17.668945] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1711a80 PMD being used: compress_qat 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 [2024-07-15 18:22:17.672126] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1714dd0 PMD being used: compress_qat 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 
accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 
accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.332 18:22:17 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 
00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == 
\d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:33.709 00:07:33.709 real 0m1.946s 00:07:33.709 user 0m1.554s 00:07:33.709 sys 0m0.387s 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.709 18:22:18 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:33.710 ************************************ 00:07:33.710 END TEST accel_cdev_decomp_full 00:07:33.710 ************************************ 00:07:33.710 18:22:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.710 18:22:18 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.710 18:22:18 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.710 18:22:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.710 18:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.710 ************************************ 00:07:33.710 START TEST accel_cdev_decomp_mcore 00:07:33.710 ************************************ 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:33.710 18:22:18 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:33.710 [2024-07-15 18:22:18.930093] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:33.710 [2024-07-15 18:22:18.930145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730050 ] 00:07:33.710 [2024-07-15 18:22:19.026751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.710 [2024-07-15 18:22:19.120774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.710 [2024-07-15 18:22:19.120876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.710 [2024-07-15 18:22:19.120993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.710 [2024-07-15 18:22:19.120994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.277 [2024-07-15 18:22:19.667211] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:34.277 [2024-07-15 18:22:19.669486] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc600c0 PMD being used: compress_qat 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 
18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:34.277 [2024-07-15 18:22:19.675185] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc4f819b8b0 PMD being used: compress_qat 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 [2024-07-15 18:22:19.677541] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc65440 PMD being used: compress_qat 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 [2024-07-15 18:22:19.679685] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc4f019b8b0 PMD being used: compress_qat 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.277 [2024-07-15 18:22:19.680073] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc4e819b8b0 PMD being used: compress_qat 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.277 18:22:19 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.277 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 18:22:19 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 
18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:35.653 00:07:35.653 real 0m1.956s 00:07:35.653 user 0m6.476s 00:07:35.653 
sys 0m0.392s 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.653 18:22:20 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:35.653 ************************************ 00:07:35.653 END TEST accel_cdev_decomp_mcore 00:07:35.653 ************************************ 00:07:35.653 18:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.653 18:22:20 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.653 18:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:35.653 18:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.653 18:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.653 ************************************ 00:07:35.653 START TEST accel_cdev_decomp_full_mcore 00:07:35.653 ************************************ 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:35.653 18:22:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:35.653 [2024-07-15 18:22:20.951970] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:35.653 [2024-07-15 18:22:20.952025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730295 ] 00:07:35.653 [2024-07-15 18:22:21.048923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.653 [2024-07-15 18:22:21.143649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.653 [2024-07-15 18:22:21.143754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.653 [2024-07-15 18:22:21.143859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.653 [2024-07-15 18:22:21.143860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.220 [2024-07-15 18:22:21.693175] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:36.220 [2024-07-15 18:22:21.695451] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25130c0 PMD being used: compress_qat 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.220 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:36.220 [2024-07-15 18:22:21.700236] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f679819b8b0 PMD being used: compress_qat 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 [2024-07-15 18:22:21.702621] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2513160 PMD being used: compress_qat 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.221 [2024-07-15 18:22:21.705165] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f679019b8b0 PMD being used: compress_qat 00:07:36.221 [2024-07-15 18:22:21.705482] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f678819b8b0 PMD being used: compress_qat 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- 
# read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.221 18:22:21 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.598 18:22:22 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:37.598 00:07:37.598 real 0m1.963s 00:07:37.598 user 0m6.491s 00:07:37.598 sys 0m0.403s 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.598 18:22:22 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:37.598 ************************************ 00:07:37.598 END TEST accel_cdev_decomp_full_mcore 00:07:37.598 ************************************ 00:07:37.598 18:22:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.598 18:22:22 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.598 18:22:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:37.598 18:22:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.598 18:22:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.598 ************************************ 00:07:37.598 START TEST accel_cdev_decomp_mthread 00:07:37.598 ************************************ 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@17 -- # local accel_module 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:37.598 18:22:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:37.599 [2024-07-15 18:22:22.986693] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:37.599 [2024-07-15 18:22:22.986747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730728 ] 00:07:37.599 [2024-07-15 18:22:23.085594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.867 [2024-07-15 18:22:23.180818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.436 [2024-07-15 18:22:23.732998] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:38.436 [2024-07-15 18:22:23.735269] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1bc7a80 PMD being used: compress_qat 00:07:38.436 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 [2024-07-15 18:22:23.740074] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1bccc40 PMD being used: compress_qat 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 [2024-07-15 18:22:23.742471] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cefa30 PMD being used: compress_qat 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 
18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 
00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.437 18:22:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- 
# val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 
00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:39.396 00:07:39.396 real 0m1.950s 00:07:39.396 user 0m1.561s 00:07:39.396 sys 0m0.386s 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.396 18:22:24 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:39.396 ************************************ 00:07:39.396 END TEST accel_cdev_decomp_mthread 00:07:39.396 ************************************ 00:07:39.396 18:22:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.396 18:22:24 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.396 18:22:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:39.396 18:22:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.396 18:22:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.654 ************************************ 00:07:39.654 START TEST accel_cdev_decomp_full_mthread 00:07:39.654 ************************************ 00:07:39.654 18:22:24 accel.accel_cdev_decomp_full_mthread -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.654 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:39.654 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.654 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.655 18:22:24 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
00:07:39.655 [2024-07-15 18:22:25.012191] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:39.655 [2024-07-15 18:22:25.012291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730987 ] 00:07:39.655 [2024-07-15 18:22:25.149274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.913 [2024-07-15 18:22:25.248282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.576 [2024-07-15 18:22:25.803172] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:40.576 [2024-07-15 18:22:25.805438] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1aeda80 PMD being used: compress_qat 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.576 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.576 18:22:25 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.577 [2024-07-15 18:22:25.809391] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1af0dd0 PMD being used: compress_qat 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:40.577 18:22:25 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 [2024-07-15 18:22:25.812121] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1c15660 PMD being used: compress_qat 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread 
-- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.577 18:22:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:41.513 00:07:41.513 real 0m2.002s 00:07:41.513 user 0m1.568s 00:07:41.513 sys 0m0.429s 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.513 18:22:26 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:41.513 ************************************ 00:07:41.513 END TEST accel_cdev_decomp_full_mthread 00:07:41.513 ************************************ 00:07:41.513 18:22:27 accel -- 
common/autotest_common.sh@1142 -- # return 0 00:07:41.513 18:22:27 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:41.513 18:22:27 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.513 18:22:27 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.513 18:22:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.513 18:22:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.513 18:22:27 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:41.513 18:22:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.513 18:22:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.513 18:22:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.513 18:22:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.513 18:22:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.513 18:22:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:41.513 18:22:27 accel -- accel/accel.sh@41 -- # jq -r . 00:07:41.513 ************************************ 00:07:41.513 START TEST accel_dif_functional_tests 00:07:41.513 ************************************ 00:07:41.513 18:22:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.772 [2024-07-15 18:22:27.094497] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:41.772 [2024-07-15 18:22:27.094548] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731403 ] 00:07:41.772 [2024-07-15 18:22:27.190497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.772 [2024-07-15 18:22:27.283886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.772 [2024-07-15 18:22:27.283988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.772 [2024-07-15 18:22:27.283991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.031 00:07:42.031 00:07:42.031 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.031 http://cunit.sourceforge.net/ 00:07:42.031 00:07:42.031 00:07:42.031 Suite: accel_dif 00:07:42.031 Test: verify: DIF generated, GUARD check ...passed 00:07:42.031 Test: verify: DIF generated, APPTAG check ...passed 00:07:42.031 Test: verify: DIF generated, REFTAG check ...passed 00:07:42.031 Test: verify: DIF not generated, GUARD check ...[2024-07-15 18:22:27.371902] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.031 passed 00:07:42.031 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 18:22:27.371982] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.031 passed 00:07:42.031 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 18:22:27.372018] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.031 passed 00:07:42.031 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:42.031 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 18:22:27.372088] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:42.031 passed 
00:07:42.031 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:42.031 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:42.031 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:42.031 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 18:22:27.372246] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:42.031 passed 00:07:42.031 Test: verify copy: DIF generated, GUARD check ...passed 00:07:42.031 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:42.031 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:42.031 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 18:22:27.372425] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.031 passed 00:07:42.031 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 18:22:27.372460] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.031 passed 00:07:42.031 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 18:22:27.372495] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.031 passed 00:07:42.031 Test: generate copy: DIF generated, GUARD check ...passed 00:07:42.031 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:42.031 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:42.031 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:42.031 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:42.031 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:42.031 Test: generate copy: iovecs-len validate ...[2024-07-15 18:22:27.372761] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:42.031 passed 00:07:42.031 Test: generate copy: buffer alignment validate ...passed 00:07:42.031 00:07:42.031 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.031 suites 1 1 n/a 0 0 00:07:42.031 tests 26 26 26 0 0 00:07:42.031 asserts 115 115 115 0 n/a 00:07:42.031 00:07:42.031 Elapsed time = 0.002 seconds 00:07:42.031 00:07:42.031 real 0m0.520s 00:07:42.031 user 0m0.728s 00:07:42.031 sys 0m0.187s 00:07:42.031 18:22:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.031 18:22:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:42.031 ************************************ 00:07:42.031 END TEST accel_dif_functional_tests 00:07:42.031 ************************************ 00:07:42.290 18:22:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.290 00:07:42.290 real 0m49.629s 00:07:42.290 user 0m59.326s 00:07:42.290 sys 0m9.051s 00:07:42.290 18:22:27 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.290 18:22:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.290 ************************************ 00:07:42.290 END TEST accel 00:07:42.290 ************************************ 00:07:42.290 18:22:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:42.290 18:22:27 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:42.290 18:22:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.290 18:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.290 18:22:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.290 ************************************ 00:07:42.290 START TEST accel_rpc 00:07:42.290 ************************************ 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:42.290 * Looking for test storage... 
00:07:42.290 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:42.290 18:22:27 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:42.290 18:22:27 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2731503 00:07:42.290 18:22:27 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2731503 00:07:42.290 18:22:27 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2731503 ']' 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.290 18:22:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.290 [2024-07-15 18:22:27.822392] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:42.290 [2024-07-15 18:22:27.822457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731503 ] 00:07:42.548 [2024-07-15 18:22:27.925348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.548 [2024-07-15 18:22:28.017551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.483 18:22:28 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.484 18:22:28 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:43.484 18:22:28 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:43.484 18:22:28 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:43.484 18:22:28 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:43.484 18:22:28 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:43.484 18:22:28 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:43.484 18:22:28 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.484 18:22:28 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.484 18:22:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.484 ************************************ 00:07:43.484 START TEST accel_assign_opcode 00:07:43.484 ************************************ 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.484 [2024-07-15 18:22:28.803976] accel_rpc.c: 
167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.484 [2024-07-15 18:22:28.816002] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.484 18:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.484 18:22:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.743 software 00:07:43.743 00:07:43.743 real 0m0.281s 00:07:43.743 user 0m0.050s 00:07:43.743 sys 0m0.009s 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:43.743 18:22:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.743 ************************************ 00:07:43.743 END TEST accel_assign_opcode 00:07:43.743 ************************************ 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:43.743 18:22:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2731503 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2731503 ']' 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2731503 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2731503 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2731503' 00:07:43.743 killing process with pid 2731503 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@967 -- # kill 2731503 00:07:43.743 18:22:29 accel_rpc -- common/autotest_common.sh@972 -- # wait 2731503 00:07:44.001 00:07:44.001 real 0m1.835s 00:07:44.001 user 0m1.983s 00:07:44.001 sys 0m0.500s 00:07:44.001 18:22:29 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.001 18:22:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.001 ************************************ 00:07:44.001 END TEST accel_rpc 00:07:44.001 ************************************ 00:07:44.002 18:22:29 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.002 18:22:29 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.002 18:22:29 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.002 18:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.002 18:22:29 -- common/autotest_common.sh@10 -- # set +x 00:07:44.260 ************************************ 00:07:44.260 START TEST app_cmdline 00:07:44.260 ************************************ 00:07:44.260 18:22:29 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.260 * Looking for test storage... 00:07:44.260 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:44.260 18:22:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.260 18:22:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2731890 00:07:44.260 18:22:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2731890 00:07:44.260 18:22:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.261 18:22:29 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2731890 ']' 00:07:44.261 18:22:29 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.261 18:22:29 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.261 18:22:29 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.261 18:22:29 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.261 18:22:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.261 [2024-07-15 18:22:29.726955] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:44.261 [2024-07-15 18:22:29.727023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731890 ] 00:07:44.519 [2024-07-15 18:22:29.827167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.519 [2024-07-15 18:22:29.927559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:45.456 { 00:07:45.456 "version": "SPDK v24.09-pre git sha1 bdeef1ed3", 00:07:45.456 "fields": { 00:07:45.456 "major": 24, 00:07:45.456 "minor": 9, 00:07:45.456 "patch": 0, 00:07:45.456 "suffix": "-pre", 00:07:45.456 "commit": "bdeef1ed3" 00:07:45.456 } 00:07:45.456 } 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.456 18:22:30 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:45.456 18:22:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.456 18:22:30 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.714 request: 00:07:45.714 { 00:07:45.714 "method": "env_dpdk_get_mem_stats", 00:07:45.714 "req_id": 1 00:07:45.714 } 00:07:45.714 Got JSON-RPC error response 00:07:45.714 response: 00:07:45.714 { 00:07:45.714 
"code": -32601, 00:07:45.714 "message": "Method not found" 00:07:45.714 } 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.714 18:22:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2731890 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2731890 ']' 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2731890 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.714 18:22:31 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2731890 00:07:45.973 18:22:31 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.973 18:22:31 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.973 18:22:31 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2731890' 00:07:45.973 killing process with pid 2731890 00:07:45.973 18:22:31 app_cmdline -- common/autotest_common.sh@967 -- # kill 2731890 00:07:45.973 18:22:31 app_cmdline -- common/autotest_common.sh@972 -- # wait 2731890 00:07:46.232 00:07:46.232 real 0m2.058s 00:07:46.232 user 0m2.654s 00:07:46.232 sys 0m0.494s 00:07:46.232 18:22:31 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.232 18:22:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.232 ************************************ 00:07:46.232 END TEST app_cmdline 00:07:46.232 ************************************ 00:07:46.232 18:22:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.232 18:22:31 -- spdk/autotest.sh@186 -- # run_test version 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:46.232 18:22:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.232 18:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.232 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:07:46.232 ************************************ 00:07:46.232 START TEST version 00:07:46.232 ************************************ 00:07:46.232 18:22:31 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:46.232 * Looking for test storage... 00:07:46.232 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:46.232 18:22:31 version -- app/version.sh@17 -- # get_header_version major 00:07:46.232 18:22:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:46.232 18:22:31 version -- app/version.sh@14 -- # cut -f2 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.492 18:22:31 version -- app/version.sh@17 -- # major=24 00:07:46.492 18:22:31 version -- app/version.sh@18 -- # get_header_version minor 00:07:46.492 18:22:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # cut -f2 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.492 18:22:31 version -- app/version.sh@18 -- # minor=9 00:07:46.492 18:22:31 version -- app/version.sh@19 -- # get_header_version patch 00:07:46.492 18:22:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # cut -f2 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.492 
18:22:31 version -- app/version.sh@19 -- # patch=0 00:07:46.492 18:22:31 version -- app/version.sh@20 -- # get_header_version suffix 00:07:46.492 18:22:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # cut -f2 00:07:46.492 18:22:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.492 18:22:31 version -- app/version.sh@20 -- # suffix=-pre 00:07:46.492 18:22:31 version -- app/version.sh@22 -- # version=24.9 00:07:46.492 18:22:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.492 18:22:31 version -- app/version.sh@28 -- # version=24.9rc0 00:07:46.492 18:22:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:46.492 18:22:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.492 18:22:31 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:46.492 18:22:31 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:46.492 00:07:46.492 real 0m0.190s 00:07:46.492 user 0m0.111s 00:07:46.492 sys 0m0.115s 00:07:46.492 18:22:31 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.492 18:22:31 version -- common/autotest_common.sh@10 -- # set +x 00:07:46.492 ************************************ 00:07:46.492 END TEST version 00:07:46.492 ************************************ 00:07:46.492 18:22:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.492 18:22:31 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:07:46.492 18:22:31 -- spdk/autotest.sh@189 -- # run_test blockdev_general 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:46.492 18:22:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.492 18:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.492 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:07:46.492 ************************************ 00:07:46.492 START TEST blockdev_general 00:07:46.492 ************************************ 00:07:46.492 18:22:31 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:46.492 * Looking for test storage... 00:07:46.492 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:46.492 18:22:32 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:46.492 18:22:32 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 
00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2732349 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 2732349 00:07:46.751 18:22:32 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 2732349 ']' 00:07:46.751 18:22:32 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:46.751 18:22:32 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.751 18:22:32 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.751 18:22:32 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:46.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.751 18:22:32 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.751 18:22:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.752 [2024-07-15 18:22:32.117157] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:46.752 [2024-07-15 18:22:32.117219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732349 ] 00:07:46.752 [2024-07-15 18:22:32.214476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.010 [2024-07-15 18:22:32.311674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.959 18:22:33 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.959 18:22:33 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:07:47.959 18:22:33 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:47.959 18:22:33 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:07:47.959 18:22:33 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:47.959 18:22:33 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.959 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.221 [2024-07-15 18:22:33.537240] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:48.221 [2024-07-15 18:22:33.537293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:48.221 00:07:48.221 [2024-07-15 18:22:33.545230] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:48.221 [2024-07-15 18:22:33.545254] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: Malloc2 00:07:48.221 00:07:48.221 Malloc0 00:07:48.221 Malloc1 00:07:48.221 Malloc2 00:07:48.221 Malloc3 00:07:48.221 Malloc4 00:07:48.221 Malloc5 00:07:48.221 Malloc6 00:07:48.221 Malloc7 00:07:48.221 Malloc8 00:07:48.221 Malloc9 00:07:48.221 [2024-07-15 18:22:33.682003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:48.221 [2024-07-15 18:22:33.682051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.221 [2024-07-15 18:22:33.682067] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15be500 00:07:48.221 [2024-07-15 18:22:33.682076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.221 [2024-07-15 18:22:33.683497] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.221 [2024-07-15 18:22:33.683522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:48.221 TestPT 00:07:48.221 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.221 18:22:33 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:48.221 5000+0 records in 00:07:48.221 5000+0 records out 00:07:48.221 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0192186 s, 533 MB/s 00:07:48.221 18:22:33 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:48.221 18:22:33 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.221 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.481 AIO0 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:48.481 18:22:33 blockdev_general -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.481 18:22:33 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.481 18:22:33 blockdev_general 
-- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:48.481 18:22:33 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:48.482 18:22:33 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "839d2af1-a043-4a98-a486-f95d7b2ea68b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "839d2af1-a043-4a98-a486-f95d7b2ea68b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4075c235-5c0a-5937-8927-2554a3d497db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4075c235-5c0a-5937-8927-2554a3d497db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "5178f687-80d5-5c76-b1a9-3cc18fad4ca3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5178f687-80d5-5c76-b1a9-3cc18fad4ca3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "90037c3c-0be4-5146-adff-dca7c5787a7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "90037c3c-0be4-5146-adff-dca7c5787a7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cce9ebbd-16c8-53bd-9373-2154283331af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cce9ebbd-16c8-53bd-9373-2154283331af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7218666c-f212-51cb-b5f8-8119370a7cd9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7218666c-f212-51cb-b5f8-8119370a7cd9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4b86e520-f274-5228-8eb3-a9610ed248d2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4b86e520-f274-5228-8eb3-a9610ed248d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b00228af-96d0-5837-8eb3-d025f581157b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b00228af-96d0-5837-8eb3-d025f581157b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' 
' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "89b78de7-5e33-55ce-8ae2-7bb2bc081bfa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "89b78de7-5e33-55ce-8ae2-7bb2bc081bfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7f83f4e4-7da2-575d-b308-4bb3845000b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f83f4e4-7da2-575d-b308-4bb3845000b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' 
"base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bcca6c30-5f81-5bd4-bbb7-09ede0c1f99a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bcca6c30-5f81-5bd4-bbb7-09ede0c1f99a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "628ca2ba-4822-5ab6-b56a-06c0460fbfbd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "628ca2ba-4822-5ab6-b56a-06c0460fbfbd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' 
' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "52260f93-2034-427a-af04-989a6745ed4d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "52260f93-2034-427a-af04-989a6745ed4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52260f93-2034-427a-af04-989a6745ed4d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cd1a18e8-e6cc-4250-b38c-03fd1aee0dc9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ab86c264-a1ad-40f8-9bcd-c801729f8847",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' 
'}' '{' ' "name": "concat0",' ' "aliases": [' ' "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "dd2fc882-987f-4efd-b7e2-85df35e81baa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ca11710c-6042-4bca-ade1-3f9eb9d394c2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fc29308f-8e27-4d7a-b3fa-d47c10802428"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' 
"uuid": "fc29308f-8e27-4d7a-b3fa-d47c10802428",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc29308f-8e27-4d7a-b3fa-d47c10802428",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3f6ac55e-bcad-4a77-8770-b3c86976bd63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9e090615-de9d-42a6-b5f6-dc540830e7c6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "cb18e894-fbf2-42cd-a33f-194ba78fc327"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "cb18e894-fbf2-42cd-a33f-194ba78fc327",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:48.741 18:22:34 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:48.741 18:22:34 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:07:48.741 18:22:34 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:48.741 18:22:34 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 2732349 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 2732349 ']' 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 2732349 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2732349 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2732349' 00:07:48.741 killing process with pid 2732349 00:07:48.741 18:22:34 blockdev_general -- 
common/autotest_common.sh@967 -- # kill 2732349 00:07:48.741 18:22:34 blockdev_general -- common/autotest_common.sh@972 -- # wait 2732349 00:07:49.001 18:22:34 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:49.001 18:22:34 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:49.001 18:22:34 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.001 18:22:34 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.001 18:22:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.001 ************************************ 00:07:49.001 START TEST bdev_hello_world 00:07:49.001 ************************************ 00:07:49.001 18:22:34 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:49.260 [2024-07-15 18:22:34.603419] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:49.260 [2024-07-15 18:22:34.603474] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732803 ] 00:07:49.260 [2024-07-15 18:22:34.704479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.260 [2024-07-15 18:22:34.794853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.519 [2024-07-15 18:22:34.945444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:49.519 [2024-07-15 18:22:34.945503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:49.519 [2024-07-15 18:22:34.945516] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:49.519 [2024-07-15 18:22:34.953450] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:49.519 [2024-07-15 18:22:34.953481] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:49.519 [2024-07-15 18:22:34.961456] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:49.519 [2024-07-15 18:22:34.961479] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:49.519 [2024-07-15 18:22:35.033696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:49.519 [2024-07-15 18:22:35.033746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.519 [2024-07-15 18:22:35.033761] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1607360 00:07:49.519 [2024-07-15 18:22:35.033770] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.519 [2024-07-15 18:22:35.035274] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:07:49.519 [2024-07-15 18:22:35.035303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:49.778 [2024-07-15 18:22:35.180404] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:49.778 [2024-07-15 18:22:35.180468] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:49.778 [2024-07-15 18:22:35.180514] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:49.778 [2024-07-15 18:22:35.180581] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:49.778 [2024-07-15 18:22:35.180649] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:49.778 [2024-07-15 18:22:35.180671] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:49.778 [2024-07-15 18:22:35.180725] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:49.778 00:07:49.778 [2024-07-15 18:22:35.180758] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:50.036 00:07:50.036 real 0m0.933s 00:07:50.036 user 0m0.637s 00:07:50.036 sys 0m0.253s 00:07:50.036 18:22:35 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.036 18:22:35 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:50.036 ************************************ 00:07:50.036 END TEST bdev_hello_world 00:07:50.036 ************************************ 00:07:50.036 18:22:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:50.036 18:22:35 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:50.036 18:22:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.036 18:22:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.036 18:22:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:50.036 ************************************ 00:07:50.036 START 
TEST bdev_bounds 00:07:50.036 ************************************ 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2733036 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2733036' 00:07:50.036 Process bdevio pid: 2733036 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2733036 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2733036 ']' 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.036 18:22:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:50.294 [2024-07-15 18:22:35.603330] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:50.294 [2024-07-15 18:22:35.603387] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733036 ] 00:07:50.294 [2024-07-15 18:22:35.695675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.294 [2024-07-15 18:22:35.791671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.294 [2024-07-15 18:22:35.791774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.294 [2024-07-15 18:22:35.791775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.551 [2024-07-15 18:22:35.938224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.551 [2024-07-15 18:22:35.938281] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:50.551 [2024-07-15 18:22:35.938297] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:50.551 [2024-07-15 18:22:35.946238] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.551 [2024-07-15 18:22:35.946263] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.551 [2024-07-15 18:22:35.954253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.551 [2024-07-15 18:22:35.954275] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.551 [2024-07-15 18:22:36.026539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.551 [2024-07-15 18:22:36.026588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.551 [2024-07-15 18:22:36.026604] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c4efc0 
00:07:50.551 [2024-07-15 18:22:36.026613] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.551 [2024-07-15 18:22:36.028146] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.551 [2024-07-15 18:22:36.028174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:51.118 18:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.118 18:22:36 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:51.118 18:22:36 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:51.377 I/O targets: 00:07:51.377 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:51.377 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:51.377 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:51.377 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:51.377 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:51.377 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:51.377 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:51.377 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:51.377 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:51.377 00:07:51.377 00:07:51.377 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.377 http://cunit.sourceforge.net/ 00:07:51.377 00:07:51.377 00:07:51.377 Suite: bdevio tests on: AIO0 00:07:51.377 Test: blockdev write read block ...passed 00:07:51.377 Test: blockdev write zeroes read block ...passed 00:07:51.377 
Test: blockdev write zeroes read no split ...passed 00:07:51.377 Test: blockdev write zeroes read split ...passed 00:07:51.377 Test: blockdev write zeroes read split partial ...passed 00:07:51.377 Test: blockdev reset ...passed 00:07:51.377 Test: blockdev write read 8 blocks ...passed 00:07:51.377 Test: blockdev write read size > 128k ...passed 00:07:51.377 Test: blockdev write read invalid size ...passed 00:07:51.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.377 Test: blockdev write read max offset ...passed 00:07:51.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.377 Test: blockdev writev readv 8 blocks ...passed 00:07:51.377 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.377 Test: blockdev writev readv block ...passed 00:07:51.377 Test: blockdev writev readv size > 128k ...passed 00:07:51.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.377 Test: blockdev comparev and writev ...passed 00:07:51.377 Test: blockdev nvme passthru rw ...passed 00:07:51.377 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.377 Test: blockdev nvme admin passthru ...passed 00:07:51.377 Test: blockdev copy ...passed 00:07:51.377 Suite: bdevio tests on: raid1 00:07:51.377 Test: blockdev write read block ...passed 00:07:51.377 Test: blockdev write zeroes read block ...passed 00:07:51.377 Test: blockdev write zeroes read no split ...passed 00:07:51.377 Test: blockdev write zeroes read split ...passed 00:07:51.377 Test: blockdev write zeroes read split partial ...passed 00:07:51.377 Test: blockdev reset ...passed 00:07:51.377 Test: blockdev write read 8 blocks ...passed 00:07:51.377 Test: blockdev write read size > 128k ...passed 00:07:51.377 Test: blockdev write read invalid size ...passed 00:07:51.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:07:51.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.377 Test: blockdev write read max offset ...passed 00:07:51.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.377 Test: blockdev writev readv 8 blocks ...passed 00:07:51.377 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.377 Test: blockdev writev readv block ...passed 00:07:51.377 Test: blockdev writev readv size > 128k ...passed 00:07:51.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.377 Test: blockdev comparev and writev ...passed 00:07:51.377 Test: blockdev nvme passthru rw ...passed 00:07:51.377 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.377 Test: blockdev nvme admin passthru ...passed 00:07:51.377 Test: blockdev copy ...passed 00:07:51.377 Suite: bdevio tests on: concat0 00:07:51.377 Test: blockdev write read block ...passed 00:07:51.377 Test: blockdev write zeroes read block ...passed 00:07:51.377 Test: blockdev write zeroes read no split ...passed 00:07:51.377 Test: blockdev write zeroes read split ...passed 00:07:51.377 Test: blockdev write zeroes read split partial ...passed 00:07:51.377 Test: blockdev reset ...passed 00:07:51.377 Test: blockdev write read 8 blocks ...passed 00:07:51.377 Test: blockdev write read size > 128k ...passed 00:07:51.377 Test: blockdev write read invalid size ...passed 00:07:51.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.377 Test: blockdev write read max offset ...passed 00:07:51.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.377 Test: blockdev writev readv 8 blocks ...passed 00:07:51.377 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.377 Test: blockdev writev readv block ...passed 00:07:51.377 Test: blockdev writev readv size > 128k ...passed 00:07:51.377 Test: 
blockdev writev readv size > 128k in two iovs ...passed 00:07:51.377 Test: blockdev comparev and writev ...passed 00:07:51.377 Test: blockdev nvme passthru rw ...passed 00:07:51.377 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.377 Test: blockdev nvme admin passthru ...passed 00:07:51.377 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: raid0 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: TestPT 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 
00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: Malloc2p7 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev 
...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: Malloc2p6 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 
Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: Malloc2p5 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: Malloc2p4 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block 
...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: Malloc2p3 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of 
blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.378 Test: blockdev copy ...passed 00:07:51.378 Suite: bdevio tests on: Malloc2p2 00:07:51.378 Test: blockdev write read block ...passed 00:07:51.378 Test: blockdev write zeroes read block ...passed 00:07:51.378 Test: blockdev write zeroes read no split ...passed 00:07:51.378 Test: blockdev write zeroes read split ...passed 00:07:51.378 Test: blockdev write zeroes read split partial ...passed 00:07:51.378 Test: blockdev reset ...passed 00:07:51.378 Test: blockdev write read 8 blocks ...passed 00:07:51.378 Test: blockdev write read size > 128k ...passed 00:07:51.378 Test: blockdev write read invalid size ...passed 00:07:51.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.378 Test: blockdev write read max offset ...passed 00:07:51.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.378 Test: blockdev writev readv 8 blocks ...passed 00:07:51.378 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.378 Test: blockdev writev readv block ...passed 00:07:51.378 Test: blockdev writev readv size > 128k ...passed 
00:07:51.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.378 Test: blockdev comparev and writev ...passed 00:07:51.378 Test: blockdev nvme passthru rw ...passed 00:07:51.378 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.378 Test: blockdev nvme admin passthru ...passed 00:07:51.379 Test: blockdev copy ...passed 00:07:51.379 Suite: bdevio tests on: Malloc2p1 00:07:51.379 Test: blockdev write read block ...passed 00:07:51.379 Test: blockdev write zeroes read block ...passed 00:07:51.379 Test: blockdev write zeroes read no split ...passed 00:07:51.379 Test: blockdev write zeroes read split ...passed 00:07:51.637 Test: blockdev write zeroes read split partial ...passed 00:07:51.637 Test: blockdev reset ...passed 00:07:51.637 Test: blockdev write read 8 blocks ...passed 00:07:51.637 Test: blockdev write read size > 128k ...passed 00:07:51.637 Test: blockdev write read invalid size ...passed 00:07:51.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.637 Test: blockdev write read max offset ...passed 00:07:51.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.637 Test: blockdev writev readv 8 blocks ...passed 00:07:51.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.637 Test: blockdev writev readv block ...passed 00:07:51.637 Test: blockdev writev readv size > 128k ...passed 00:07:51.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.637 Test: blockdev comparev and writev ...passed 00:07:51.637 Test: blockdev nvme passthru rw ...passed 00:07:51.637 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.637 Test: blockdev nvme admin passthru ...passed 00:07:51.637 Test: blockdev copy ...passed 00:07:51.637 Suite: bdevio tests on: Malloc2p0 00:07:51.637 Test: blockdev write read block ...passed 00:07:51.637 Test: blockdev write 
zeroes read block ...passed 00:07:51.637 Test: blockdev write zeroes read no split ...passed 00:07:51.637 Test: blockdev write zeroes read split ...passed 00:07:51.637 Test: blockdev write zeroes read split partial ...passed 00:07:51.637 Test: blockdev reset ...passed 00:07:51.637 Test: blockdev write read 8 blocks ...passed 00:07:51.637 Test: blockdev write read size > 128k ...passed 00:07:51.637 Test: blockdev write read invalid size ...passed 00:07:51.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.637 Test: blockdev write read max offset ...passed 00:07:51.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.637 Test: blockdev writev readv 8 blocks ...passed 00:07:51.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.637 Test: blockdev writev readv block ...passed 00:07:51.637 Test: blockdev writev readv size > 128k ...passed 00:07:51.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.637 Test: blockdev comparev and writev ...passed 00:07:51.637 Test: blockdev nvme passthru rw ...passed 00:07:51.637 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.637 Test: blockdev nvme admin passthru ...passed 00:07:51.637 Test: blockdev copy ...passed 00:07:51.637 Suite: bdevio tests on: Malloc1p1 00:07:51.637 Test: blockdev write read block ...passed 00:07:51.637 Test: blockdev write zeroes read block ...passed 00:07:51.637 Test: blockdev write zeroes read no split ...passed 00:07:51.637 Test: blockdev write zeroes read split ...passed 00:07:51.637 Test: blockdev write zeroes read split partial ...passed 00:07:51.637 Test: blockdev reset ...passed 00:07:51.637 Test: blockdev write read 8 blocks ...passed 00:07:51.637 Test: blockdev write read size > 128k ...passed 00:07:51.637 Test: blockdev write read invalid size ...passed 00:07:51.637 Test: blockdev write read offset + 
nbytes == size of blockdev ...passed 00:07:51.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.637 Test: blockdev write read max offset ...passed 00:07:51.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.637 Test: blockdev writev readv 8 blocks ...passed 00:07:51.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.637 Test: blockdev writev readv block ...passed 00:07:51.637 Test: blockdev writev readv size > 128k ...passed 00:07:51.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.637 Test: blockdev comparev and writev ...passed 00:07:51.637 Test: blockdev nvme passthru rw ...passed 00:07:51.637 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.637 Test: blockdev nvme admin passthru ...passed 00:07:51.637 Test: blockdev copy ...passed 00:07:51.637 Suite: bdevio tests on: Malloc1p0 00:07:51.637 Test: blockdev write read block ...passed 00:07:51.637 Test: blockdev write zeroes read block ...passed 00:07:51.637 Test: blockdev write zeroes read no split ...passed 00:07:51.637 Test: blockdev write zeroes read split ...passed 00:07:51.637 Test: blockdev write zeroes read split partial ...passed 00:07:51.637 Test: blockdev reset ...passed 00:07:51.637 Test: blockdev write read 8 blocks ...passed 00:07:51.637 Test: blockdev write read size > 128k ...passed 00:07:51.637 Test: blockdev write read invalid size ...passed 00:07:51.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.637 Test: blockdev write read max offset ...passed 00:07:51.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.637 Test: blockdev writev readv 8 blocks ...passed 00:07:51.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.637 Test: blockdev writev readv block ...passed 00:07:51.637 Test: blockdev writev readv size > 
128k ...passed 00:07:51.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.637 Test: blockdev comparev and writev ...passed 00:07:51.637 Test: blockdev nvme passthru rw ...passed 00:07:51.637 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.637 Test: blockdev nvme admin passthru ...passed 00:07:51.637 Test: blockdev copy ...passed 00:07:51.637 Suite: bdevio tests on: Malloc0 00:07:51.637 Test: blockdev write read block ...passed 00:07:51.637 Test: blockdev write zeroes read block ...passed 00:07:51.637 Test: blockdev write zeroes read no split ...passed 00:07:51.637 Test: blockdev write zeroes read split ...passed 00:07:51.637 Test: blockdev write zeroes read split partial ...passed 00:07:51.637 Test: blockdev reset ...passed 00:07:51.637 Test: blockdev write read 8 blocks ...passed 00:07:51.637 Test: blockdev write read size > 128k ...passed 00:07:51.637 Test: blockdev write read invalid size ...passed 00:07:51.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.637 Test: blockdev write read max offset ...passed 00:07:51.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.637 Test: blockdev writev readv 8 blocks ...passed 00:07:51.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.637 Test: blockdev writev readv block ...passed 00:07:51.637 Test: blockdev writev readv size > 128k ...passed 00:07:51.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.637 Test: blockdev comparev and writev ...passed 00:07:51.637 Test: blockdev nvme passthru rw ...passed 00:07:51.638 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.638 Test: blockdev nvme admin passthru ...passed 00:07:51.638 Test: blockdev copy ...passed 00:07:51.638 00:07:51.638 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.638 suites 16 16 n/a 0 0 00:07:51.638 
tests 368 368 368 0 0 00:07:51.638 asserts 2224 2224 2224 0 n/a 00:07:51.638 00:07:51.638 Elapsed time = 0.658 seconds 00:07:51.638 0 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2733036 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2733036 ']' 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2733036 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2733036 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2733036' 00:07:51.638 killing process with pid 2733036 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2733036 00:07:51.638 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2733036 00:07:51.896 18:22:37 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:51.896 00:07:51.896 real 0m1.783s 00:07:51.896 user 0m4.674s 00:07:51.896 sys 0m0.419s 00:07:51.896 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.896 18:22:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:51.896 ************************************ 00:07:51.896 END TEST bdev_bounds 00:07:51.896 ************************************ 00:07:51.896 18:22:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 
0 00:07:51.896 18:22:37 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:51.896 18:22:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:51.896 18:22:37 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.896 18:22:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:51.896 ************************************ 00:07:51.896 START TEST bdev_nbd 00:07:51.896 ************************************ 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2733290 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2733290 /var/tmp/spdk-nbd.sock 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2733290 ']' 00:07:51.896 18:22:37 
blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:51.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.896 18:22:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:52.155 [2024-07-15 18:22:37.454538] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:52.155 [2024-07-15 18:22:37.454592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.155 [2024-07-15 18:22:37.555315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.155 [2024-07-15 18:22:37.646127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.413 [2024-07-15 18:22:37.790836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:52.413 [2024-07-15 18:22:37.790898] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:52.413 [2024-07-15 18:22:37.790910] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:52.413 [2024-07-15 18:22:37.798843] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:52.413 [2024-07-15 18:22:37.798869] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:52.413 [2024-07-15 18:22:37.806855] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:52.413 [2024-07-15 18:22:37.806877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:52.413 [2024-07-15 18:22:37.879139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:52.413 [2024-07-15 18:22:37.879189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.413 [2024-07-15 18:22:37.879202] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13ee790 00:07:52.413 [2024-07-15 18:22:37.879211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.413 [2024-07-15 18:22:37.880779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.413 [2024-07-15 18:22:37.880806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:52.979 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 
00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.237 1+0 records in 00:07:53.237 1+0 records out 00:07:53.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196847 s, 20.8 MB/s 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.237 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.238 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.238 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.238 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.238 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.238 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.238 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd1 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.497 1+0 records in 00:07:53.497 1+0 records out 00:07:53.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211539 s, 19.4 MB/s 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.497 18:22:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.497 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.497 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.497 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.497 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.497 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.757 1+0 records in 00:07:53.757 1+0 records out 00:07:53.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229789 s, 17.8 MB/s 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.757 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.016 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.275 1+0 records in 00:07:54.275 1+0 records out 00:07:54.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025855 s, 15.8 MB/s 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.275 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 
00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.534 1+0 records in 00:07:54.534 1+0 records out 00:07:54.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027706 s, 14.8 MB/s 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.534 18:22:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.793 1+0 records in 00:07:54.793 1+0 records out 00:07:54.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315507 s, 13.0 MB/s 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.793 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.053 1+0 records in 00:07:55.053 1+0 records out 00:07:55.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282202 s, 14.5 MB/s 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.053 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.053 
18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.312 1+0 records in 00:07:55.312 1+0 records out 00:07:55.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277989 s, 14.7 MB/s 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.312 18:22:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:55.588 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:55.588 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:55.588 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:55.588 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:07:55.588 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.588 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.589 1+0 records in 00:07:55.589 1+0 records out 
00:07:55.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328125 s, 12.5 MB/s 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.589 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.847 18:22:41 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.847 1+0 records in 00:07:55.847 1+0 records out 00:07:55.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347834 s, 11.8 MB/s 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.847 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 
00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.106 1+0 records in 00:07:56.106 1+0 records out 00:07:56.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384886 s, 10.6 MB/s 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.106 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 
-- # nbd_device=/dev/nbd11 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.421 1+0 records in 00:07:56.421 1+0 records out 00:07:56.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400723 s, 10.2 MB/s 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.421 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.422 18:22:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.422 
18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.422 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.422 18:22:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.701 1+0 records in 00:07:56.701 1+0 records out 00:07:56.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427344 s, 9.6 MB/s 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.701 18:22:42 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@884 -- # size=4096 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.701 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.960 1+0 records in 00:07:56.960 1+0 records out 00:07:56.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394989 s, 10.4 MB/s 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.960 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep 
-q -w nbd14 /proc/partitions 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.527 1+0 records in 00:07:57.527 1+0 records out 00:07:57.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440772 s, 9.3 MB/s 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:57.527 18:22:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.527 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.786 1+0 records in 00:07:57.786 1+0 records out 00:07:57.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467365 s, 8.8 MB/s 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:57.786 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # 
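The trace repeated above for each device (autotest_common.sh@866-887) is the waitfornbd helper: it polls /proc/partitions until the kernel registers the new nbd device, then reads one 4 KiB block back with dd to confirm I/O actually completes. A minimal sketch of the first (polling) loop follows; the function name and the partitions-table parameter are assumptions made here so the sketch can be exercised without real nbd devices — the actual helper hard-codes /proc/partitions:

```shell
# Hypothetical stand-in for waitfornbd's first loop: wait for a block-device
# name to appear in a partitions table, retrying up to 20 times.
wait_for_partition() {
    local name=$1
    local table=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # grep -w matches the whole device name, so "nbd1" does not match "nbd10".
        if grep -q -w "$name" "$table"; then
            return 0
        fi
        sleep 0.05
    done
    return 1
}
```

The real helper follows this with a second retry loop running `dd if=/dev/$name ... bs=4096 count=1 iflag=direct` plus a `stat -c %s` size check on the copied file, which is why every device in the log shows a `1+0 records in / 1+0 records out` pair followed by `size=4096`.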
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.044 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd0", 00:07:58.044 "bdev_name": "Malloc0" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd1", 00:07:58.044 "bdev_name": "Malloc1p0" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd2", 00:07:58.044 "bdev_name": "Malloc1p1" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd3", 00:07:58.044 "bdev_name": "Malloc2p0" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd4", 00:07:58.044 "bdev_name": "Malloc2p1" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd5", 00:07:58.044 "bdev_name": "Malloc2p2" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd6", 00:07:58.044 "bdev_name": "Malloc2p3" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd7", 00:07:58.044 "bdev_name": "Malloc2p4" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd8", 00:07:58.044 "bdev_name": "Malloc2p5" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd9", 00:07:58.044 "bdev_name": "Malloc2p6" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd10", 00:07:58.044 "bdev_name": "Malloc2p7" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd11", 00:07:58.044 "bdev_name": "TestPT" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd12", 00:07:58.044 "bdev_name": "raid0" 00:07:58.044 }, 00:07:58.044 { 00:07:58.044 "nbd_device": "/dev/nbd13", 00:07:58.044 "bdev_name": "concat0" 00:07:58.044 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd14", 00:07:58.045 "bdev_name": "raid1" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd15", 00:07:58.045 "bdev_name": "AIO0" 00:07:58.045 } 00:07:58.045 ]' 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd0", 00:07:58.045 "bdev_name": "Malloc0" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd1", 00:07:58.045 "bdev_name": "Malloc1p0" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd2", 00:07:58.045 "bdev_name": "Malloc1p1" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd3", 00:07:58.045 "bdev_name": "Malloc2p0" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd4", 00:07:58.045 "bdev_name": "Malloc2p1" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd5", 00:07:58.045 "bdev_name": "Malloc2p2" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd6", 00:07:58.045 "bdev_name": "Malloc2p3" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd7", 00:07:58.045 "bdev_name": "Malloc2p4" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd8", 00:07:58.045 "bdev_name": "Malloc2p5" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd9", 00:07:58.045 "bdev_name": "Malloc2p6" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd10", 00:07:58.045 "bdev_name": "Malloc2p7" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd11", 00:07:58.045 "bdev_name": "TestPT" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd12", 00:07:58.045 "bdev_name": "raid0" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd13", 00:07:58.045 "bdev_name": "concat0" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd14", 00:07:58.045 "bdev_name": "raid1" 00:07:58.045 }, 00:07:58.045 { 00:07:58.045 "nbd_device": "/dev/nbd15", 00:07:58.045 "bdev_name": "AIO0" 00:07:58.045 } 00:07:58.045 ]' 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.045 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.303 18:22:43 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.303 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.561 18:22:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.820 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.078 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:59.336 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd4 /proc/partitions 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.337 18:22:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.595 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.853 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.111 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.678 18:22:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.678 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.679 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.679 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:00.937 18:22:46 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.937 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.505 18:22:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 
00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.505 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:01.764 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.023 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd14 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.282 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.540 18:22:47 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.540 18:22:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@12 -- # local i 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:02.799 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:03.059 /dev/nbd0 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.059 1+0 records in 00:08:03.059 1+0 records out 00:08:03.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235744 s, 17.4 MB/s 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.059 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:08:03.318 /dev/nbd1 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:08:03.318 1+0 records in 00:08:03.318 1+0 records out 00:08:03.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231566 s, 17.7 MB/s 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.318 18:22:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:08:03.577 /dev/nbd10 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:03.577 18:22:49 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.577 1+0 records in 00:08:03.577 1+0 records out 00:08:03.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263652 s, 15.5 MB/s 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.577 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:08:03.836 /dev/nbd11 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.836 1+0 records in 00:08:03.836 1+0 records out 00:08:03.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250555 s, 16.3 MB/s 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.836 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:08:04.095 /dev/nbd12 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd12 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.354 1+0 records in 00:08:04.354 1+0 records out 00:08:04.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259708 s, 15.8 MB/s 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.354 18:22:49 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.354 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:08:04.613 /dev/nbd13 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.613 1+0 records in 00:08:04.613 1+0 records out 00:08:04.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309926 s, 13.2 MB/s 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.613 18:22:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:08:04.872 /dev/nbd14 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.872 1+0 records in 00:08:04.872 1+0 records out 00:08:04.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312234 s, 
13.1 MB/s 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.872 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:08:05.132 /dev/nbd15 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.132 1+0 records in 00:08:05.132 1+0 records out 00:08:05.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331631 s, 12.4 MB/s 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.132 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:08:05.391 /dev/nbd2 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 
00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.391 1+0 records in 00:08:05.391 1+0 records out 00:08:05.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314997 s, 13.0 MB/s 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.391 18:22:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:08:05.650 /dev/nbd3 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:08:05.650 18:22:51 
blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.650 1+0 records in 00:08:05.650 1+0 records out 00:08:05.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387388 s, 10.6 MB/s 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.650 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:05.651 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.651 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.651 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:05.651 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.651 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.651 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:08:05.910 /dev/nbd4 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.910 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.910 1+0 records in 00:08:05.910 1+0 records out 00:08:05.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425529 s, 9.6 MB/s 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.170 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:08:06.429 /dev/nbd5 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.429 1+0 records in 00:08:06.429 1+0 records out 00:08:06.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404979 s, 10.1 MB/s 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.429 18:22:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:08:06.688 /dev/nbd6 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.688 1+0 records in 00:08:06.688 1+0 records out 00:08:06.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449102 s, 9.1 MB/s 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:06.688 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:06.689 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.689 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.689 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:08:06.948 /dev/nbd7 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:08:06.948 18:22:52 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.948 1+0 records in 00:08:06.948 1+0 records out 00:08:06.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906331 s, 4.5 MB/s 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.948 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:08:07.207 /dev/nbd8 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # 
local i 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.207 1+0 records in 00:08:07.207 1+0 records out 00:08:07.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604032 s, 6.8 MB/s 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.207 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:08:07.467 /dev/nbd9 00:08:07.467 18:22:52 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.467 1+0 records in 00:08:07.467 1+0 records out 00:08:07.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920188 s, 4.5 MB/s 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.467 18:22:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:07.727 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd0", 00:08:07.727 "bdev_name": "Malloc0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd1", 00:08:07.727 "bdev_name": "Malloc1p0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd10", 00:08:07.727 "bdev_name": "Malloc1p1" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd11", 00:08:07.727 "bdev_name": "Malloc2p0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd12", 00:08:07.727 "bdev_name": "Malloc2p1" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd13", 00:08:07.727 "bdev_name": "Malloc2p2" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd14", 00:08:07.727 "bdev_name": "Malloc2p3" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd15", 00:08:07.727 "bdev_name": "Malloc2p4" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd2", 00:08:07.727 "bdev_name": "Malloc2p5" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd3", 00:08:07.727 "bdev_name": "Malloc2p6" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd4", 00:08:07.727 "bdev_name": "Malloc2p7" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd5", 00:08:07.727 "bdev_name": "TestPT" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd6", 00:08:07.727 
"bdev_name": "raid0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd7", 00:08:07.727 "bdev_name": "concat0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd8", 00:08:07.727 "bdev_name": "raid1" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd9", 00:08:07.727 "bdev_name": "AIO0" 00:08:07.727 } 00:08:07.727 ]' 00:08:07.727 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd0", 00:08:07.727 "bdev_name": "Malloc0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd1", 00:08:07.727 "bdev_name": "Malloc1p0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd10", 00:08:07.727 "bdev_name": "Malloc1p1" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd11", 00:08:07.727 "bdev_name": "Malloc2p0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd12", 00:08:07.727 "bdev_name": "Malloc2p1" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd13", 00:08:07.727 "bdev_name": "Malloc2p2" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd14", 00:08:07.727 "bdev_name": "Malloc2p3" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd15", 00:08:07.727 "bdev_name": "Malloc2p4" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd2", 00:08:07.727 "bdev_name": "Malloc2p5" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd3", 00:08:07.727 "bdev_name": "Malloc2p6" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd4", 00:08:07.727 "bdev_name": "Malloc2p7" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd5", 00:08:07.727 "bdev_name": "TestPT" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd6", 00:08:07.727 "bdev_name": "raid0" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd7", 00:08:07.727 "bdev_name": "concat0" 00:08:07.727 }, 00:08:07.727 { 
00:08:07.727 "nbd_device": "/dev/nbd8", 00:08:07.727 "bdev_name": "raid1" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "nbd_device": "/dev/nbd9", 00:08:07.727 "bdev_name": "AIO0" 00:08:07.727 } 00:08:07.727 ]' 00:08:07.727 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:07.986 /dev/nbd1 00:08:07.986 /dev/nbd10 00:08:07.986 /dev/nbd11 00:08:07.986 /dev/nbd12 00:08:07.986 /dev/nbd13 00:08:07.986 /dev/nbd14 00:08:07.986 /dev/nbd15 00:08:07.986 /dev/nbd2 00:08:07.986 /dev/nbd3 00:08:07.986 /dev/nbd4 00:08:07.986 /dev/nbd5 00:08:07.986 /dev/nbd6 00:08:07.986 /dev/nbd7 00:08:07.986 /dev/nbd8 00:08:07.986 /dev/nbd9' 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:07.986 /dev/nbd1 00:08:07.986 /dev/nbd10 00:08:07.986 /dev/nbd11 00:08:07.986 /dev/nbd12 00:08:07.986 /dev/nbd13 00:08:07.986 /dev/nbd14 00:08:07.986 /dev/nbd15 00:08:07.986 /dev/nbd2 00:08:07.986 /dev/nbd3 00:08:07.986 /dev/nbd4 00:08:07.986 /dev/nbd5 00:08:07.986 /dev/nbd6 00:08:07.986 /dev/nbd7 00:08:07.986 /dev/nbd8 00:08:07.986 /dev/nbd9' 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:07.986 256+0 records in 00:08:07.986 256+0 records out 00:08:07.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103528 s, 101 MB/s 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.986 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:08.245 256+0 records in 00:08:08.245 256+0 records out 00:08:08.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222411 s, 4.7 MB/s 00:08:08.245 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.245 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.245 256+0 records in 00:08:08.245 256+0 records out 00:08:08.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.226439 s, 4.6 MB/s 00:08:08.245 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:08:08.245 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:08.504 256+0 records in 00:08:08.504 256+0 records out 00:08:08.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192351 s, 5.5 MB/s 00:08:08.504 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.504 18:22:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:08.763 256+0 records in 00:08:08.763 256+0 records out 00:08:08.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175317 s, 6.0 MB/s 00:08:08.763 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.763 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:08.763 256+0 records in 00:08:08.763 256+0 records out 00:08:08.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134085 s, 7.8 MB/s 00:08:08.763 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.763 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:09.022 256+0 records in 00:08:09.022 256+0 records out 00:08:09.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150658 s, 7.0 MB/s 00:08:09.022 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.022 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:09.281 256+0 records in 00:08:09.281 256+0 
records out 00:08:09.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227106 s, 4.6 MB/s 00:08:09.281 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.281 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:08:09.540 256+0 records in 00:08:09.540 256+0 records out 00:08:09.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225421 s, 4.7 MB/s 00:08:09.540 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.540 18:22:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:08:09.799 256+0 records in 00:08:09.799 256+0 records out 00:08:09.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220713 s, 4.8 MB/s 00:08:09.799 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.799 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:08:10.058 256+0 records in 00:08:10.058 256+0 records out 00:08:10.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223645 s, 4.7 MB/s 00:08:10.058 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.058 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:08:10.058 256+0 records in 00:08:10.058 256+0 records out 00:08:10.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177795 s, 5.9 MB/s 00:08:10.058 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.058 18:22:55 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:08:10.315 256+0 records in 00:08:10.315 256+0 records out 00:08:10.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12261 s, 8.6 MB/s 00:08:10.315 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.315 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:08:10.315 256+0 records in 00:08:10.315 256+0 records out 00:08:10.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146287 s, 7.2 MB/s 00:08:10.315 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.315 18:22:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:08:10.574 256+0 records in 00:08:10.574 256+0 records out 00:08:10.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2261 s, 4.6 MB/s 00:08:10.574 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.574 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:08:10.833 256+0 records in 00:08:10.833 256+0 records out 00:08:10.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239501 s, 4.4 MB/s 00:08:10.833 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.833 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:08:11.092 256+0 records in 00:08:11.092 256+0 records out 00:08:11.092 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.221884 s, 4.7 MB/s 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:11.092 18:22:56 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.092 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 
00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.350 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.608 18:22:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.866 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.124 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.381 18:22:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:12.668 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:12.668 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:12.668 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:12.668 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.668 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.669 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:12.669 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.669 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.669 18:22:58 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.669 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.926 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.184 18:22:58 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.184 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.443 18:22:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.702 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:13.961 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:13.961 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:13.961 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:13.961 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.220 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.220 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:14.220 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.220 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.220 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.220 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:14.478 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.479 18:22:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.738 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.997 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.256 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:15.514 18:23:00 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.514 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.515 18:23:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.772 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:16.031 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:16.032 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:16.032 18:23:01 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:16.032 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.032 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:16.032 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:16.032 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:16.032 
18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:16.290 malloc_lvol_verify 00:08:16.290 18:23:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:16.548 4f27cb47-ab5f-4f55-b73a-a37f1203d341 00:08:16.548 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:16.806 78bb2144-83b0-4e68-9b03-17a36766bfb6 00:08:16.806 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:17.064 /dev/nbd0 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:17.323 mke2fs 1.46.5 (30-Dec-2021) 00:08:17.323 Discarding device blocks: 0/4096 done 00:08:17.323 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:17.323 00:08:17.323 Allocating group tables: 0/1 done 00:08:17.323 Writing inode tables: 0/1 done 00:08:17.323 Creating journal (1024 blocks): done 00:08:17.323 Writing superblocks and filesystem accounting information: 0/1 done 00:08:17.323 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.323 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2733290 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2733290 ']' 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2733290 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2733290 00:08:17.583 18:23:02 
blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2733290' 00:08:17.583 killing process with pid 2733290 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2733290 00:08:17.583 18:23:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2733290 00:08:18.151 18:23:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:18.151 00:08:18.151 real 0m26.090s 00:08:18.151 user 0m35.346s 00:08:18.151 sys 0m11.446s 00:08:18.151 18:23:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.151 18:23:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:18.151 ************************************ 00:08:18.151 END TEST bdev_nbd 00:08:18.151 ************************************ 00:08:18.151 18:23:03 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:18.151 18:23:03 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:18.151 18:23:03 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:08:18.151 18:23:03 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:08:18.151 18:23:03 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:08:18.151 18:23:03 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.151 18:23:03 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.151 18:23:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:18.151 ************************************ 00:08:18.151 START TEST bdev_fio 00:08:18.151 ************************************ 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1123 -- # fio_test_suite '' 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:18.151 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:18.151 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:18.152 18:23:03 
blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
'[job_Malloc2p0]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:08:18.152 18:23:03 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.152 18:23:03 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:18.152 ************************************ 00:08:18.152 START TEST bdev_fio_rw_verify 00:08:18.152 ************************************ 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:18.152 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:18.438 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:18.438 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:18.438 18:23:03 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:18.438 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:18.438 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:18.438 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:18.438 18:23:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:18.703 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:18.703 fio-3.35 00:08:18.703 Starting 16 threads 00:08:30.910 00:08:30.910 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=2738311: Mon Jul 15 18:23:15 2024 00:08:30.910 read: IOPS=80.8k, BW=316MiB/s (331MB/s)(3156MiB/10001msec) 00:08:30.910 slat (usec): min=3, max=1396, avg=39.41, stdev=14.61 00:08:30.910 clat (usec): min=13, max=1828, avg=320.66, stdev=136.24 00:08:30.910 lat (usec): min=23, max=1887, avg=360.07, stdev=143.49 00:08:30.910 clat percentiles (usec): 00:08:30.910 | 50.000th=[ 314], 99.000th=[ 619], 99.900th=[ 766], 99.990th=[ 971], 00:08:30.910 | 99.999th=[ 1237] 00:08:30.910 write: IOPS=127k, BW=497MiB/s (521MB/s)(4902MiB/9867msec); 0 zone resets 00:08:30.910 slat (usec): min=8, max=3652, avg=54.62, stdev=15.53 00:08:30.910 clat (usec): min=15, max=4216, avg=381.47, stdev=165.30 00:08:30.910 lat (usec): min=37, max=4280, avg=436.09, stdev=172.43 00:08:30.910 clat percentiles (usec): 
00:08:30.910 | 50.000th=[ 367], 99.000th=[ 840], 99.900th=[ 996], 99.990th=[ 1074], 00:08:30.910 | 99.999th=[ 1254] 00:08:30.910 bw ( KiB/s): min=427320, max=612332, per=98.94%, avg=503347.89, stdev=3616.95, samples=304 00:08:30.910 iops : min=106828, max=153081, avg=125835.95, stdev=904.23, samples=304 00:08:30.910 lat (usec) : 20=0.01%, 50=0.19%, 100=2.35%, 250=25.87%, 500=52.47% 00:08:30.910 lat (usec) : 750=17.88%, 1000=1.18% 00:08:30.910 lat (msec) : 2=0.06%, 10=0.01% 00:08:30.910 cpu : usr=99.30%, sys=0.29%, ctx=615, majf=0, minf=2814 00:08:30.910 IO depths : 1=12.5%, 2=24.9%, 4=50.1%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:30.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.910 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.910 issued rwts: total=807947,1254972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:30.910 00:08:30.910 Run status group 0 (all jobs): 00:08:30.910 READ: bw=316MiB/s (331MB/s), 316MiB/s-316MiB/s (331MB/s-331MB/s), io=3156MiB (3309MB), run=10001-10001msec 00:08:30.910 WRITE: bw=497MiB/s (521MB/s), 497MiB/s-497MiB/s (521MB/s-521MB/s), io=4902MiB (5140MB), run=9867-9867msec 00:08:30.910 00:08:30.910 real 0m12.125s 00:08:30.910 user 2m49.050s 00:08:30.910 sys 0m1.178s 00:08:30.910 18:23:15 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.910 18:23:15 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:08:30.910 ************************************ 00:08:30.910 END TEST bdev_fio_rw_verify 00:08:30.910 ************************************ 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:08:30.910 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:30.912 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # 
printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "839d2af1-a043-4a98-a486-f95d7b2ea68b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "839d2af1-a043-4a98-a486-f95d7b2ea68b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4075c235-5c0a-5937-8927-2554a3d497db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4075c235-5c0a-5937-8927-2554a3d497db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' 
' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "5178f687-80d5-5c76-b1a9-3cc18fad4ca3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5178f687-80d5-5c76-b1a9-3cc18fad4ca3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "90037c3c-0be4-5146-adff-dca7c5787a7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "90037c3c-0be4-5146-adff-dca7c5787a7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' 
}' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cce9ebbd-16c8-53bd-9373-2154283331af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cce9ebbd-16c8-53bd-9373-2154283331af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7218666c-f212-51cb-b5f8-8119370a7cd9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7218666c-f212-51cb-b5f8-8119370a7cd9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' 
"aliases": [' ' "4b86e520-f274-5228-8eb3-a9610ed248d2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4b86e520-f274-5228-8eb3-a9610ed248d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b00228af-96d0-5837-8eb3-d025f581157b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b00228af-96d0-5837-8eb3-d025f581157b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' 
"89b78de7-5e33-55ce-8ae2-7bb2bc081bfa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "89b78de7-5e33-55ce-8ae2-7bb2bc081bfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7f83f4e4-7da2-575d-b308-4bb3845000b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f83f4e4-7da2-575d-b308-4bb3845000b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bcca6c30-5f81-5bd4-bbb7-09ede0c1f99a"' ' ],' ' 
"product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bcca6c30-5f81-5bd4-bbb7-09ede0c1f99a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "628ca2ba-4822-5ab6-b56a-06c0460fbfbd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "628ca2ba-4822-5ab6-b56a-06c0460fbfbd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' 
"base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "52260f93-2034-427a-af04-989a6745ed4d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "52260f93-2034-427a-af04-989a6745ed4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52260f93-2034-427a-af04-989a6745ed4d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cd1a18e8-e6cc-4250-b38c-03fd1aee0dc9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ab86c264-a1ad-40f8-9bcd-c801729f8847",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 
512,' ' "num_blocks": 131072,' ' "uuid": "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "dd2fc882-987f-4efd-b7e2-85df35e81baa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ca11710c-6042-4bca-ade1-3f9eb9d394c2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fc29308f-8e27-4d7a-b3fa-d47c10802428"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc29308f-8e27-4d7a-b3fa-d47c10802428",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc29308f-8e27-4d7a-b3fa-d47c10802428",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3f6ac55e-bcad-4a77-8770-b3c86976bd63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9e090615-de9d-42a6-b5f6-dc540830e7c6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "cb18e894-fbf2-42cd-a33f-194ba78fc327"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "cb18e894-fbf2-42cd-a33f-194ba78fc327",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:30.912 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:08:30.912 Malloc1p0 00:08:30.912 Malloc1p1 00:08:30.912 Malloc2p0 00:08:30.912 Malloc2p1 00:08:30.912 Malloc2p2 00:08:30.912 Malloc2p3 00:08:30.912 Malloc2p4 00:08:30.912 Malloc2p5 00:08:30.912 Malloc2p6 00:08:30.912 Malloc2p7 00:08:30.912 TestPT 00:08:30.912 raid0 00:08:30.912 concat0 ]] 00:08:30.912 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "839d2af1-a043-4a98-a486-f95d7b2ea68b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "839d2af1-a043-4a98-a486-f95d7b2ea68b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": 
false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4075c235-5c0a-5937-8927-2554a3d497db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4075c235-5c0a-5937-8927-2554a3d497db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "5178f687-80d5-5c76-b1a9-3cc18fad4ca3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5178f687-80d5-5c76-b1a9-3cc18fad4ca3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "90037c3c-0be4-5146-adff-dca7c5787a7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "90037c3c-0be4-5146-adff-dca7c5787a7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cce9ebbd-16c8-53bd-9373-2154283331af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cce9ebbd-16c8-53bd-9373-2154283331af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7218666c-f212-51cb-b5f8-8119370a7cd9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7218666c-f212-51cb-b5f8-8119370a7cd9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4b86e520-f274-5228-8eb3-a9610ed248d2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4b86e520-f274-5228-8eb3-a9610ed248d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' 
"copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b00228af-96d0-5837-8eb3-d025f581157b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b00228af-96d0-5837-8eb3-d025f581157b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "89b78de7-5e33-55ce-8ae2-7bb2bc081bfa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "89b78de7-5e33-55ce-8ae2-7bb2bc081bfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7f83f4e4-7da2-575d-b308-4bb3845000b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f83f4e4-7da2-575d-b308-4bb3845000b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bcca6c30-5f81-5bd4-bbb7-09ede0c1f99a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bcca6c30-5f81-5bd4-bbb7-09ede0c1f99a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' 
"base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "628ca2ba-4822-5ab6-b56a-06c0460fbfbd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "628ca2ba-4822-5ab6-b56a-06c0460fbfbd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "52260f93-2034-427a-af04-989a6745ed4d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "52260f93-2034-427a-af04-989a6745ed4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52260f93-2034-427a-af04-989a6745ed4d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cd1a18e8-e6cc-4250-b38c-03fd1aee0dc9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ab86c264-a1ad-40f8-9bcd-c801729f8847",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c73e9a77-5fac-4cf8-9d7a-fde5eb6b1209",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "dd2fc882-987f-4efd-b7e2-85df35e81baa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ca11710c-6042-4bca-ade1-3f9eb9d394c2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fc29308f-8e27-4d7a-b3fa-d47c10802428"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc29308f-8e27-4d7a-b3fa-d47c10802428",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc29308f-8e27-4d7a-b3fa-d47c10802428",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3f6ac55e-bcad-4a77-8770-b3c86976bd63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9e090615-de9d-42a6-b5f6-dc540830e7c6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "cb18e894-fbf2-42cd-a33f-194ba78fc327"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "cb18e894-fbf2-42cd-a33f-194ba78fc327",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.913 18:23:15 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:30.913 ************************************ 00:08:30.913 START TEST bdev_fio_trim 00:08:30.913 ************************************ 00:08:30.913 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:30.913 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:30.913 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:30.914 18:23:15 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:08:30.914 18:23:15 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:30.914 18:23:16 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:30.914 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc1p0: (g=0): 
rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:30.914 fio-3.35 00:08:30.914 Starting 14 threads 00:08:43.126 00:08:43.126 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=2740421: Mon Jul 15 18:23:27 2024 00:08:43.126 write: IOPS=115k, BW=449MiB/s (470MB/s)(4486MiB/10002msec); 0 zone resets 00:08:43.126 slat (usec): min=8, max=3502, avg=42.93, stdev=11.44 
00:08:43.126 clat (usec): min=28, max=3926, avg=305.42, stdev=98.75 00:08:43.126 lat (usec): min=43, max=3975, avg=348.35, stdev=102.13 00:08:43.126 clat percentiles (usec): 00:08:43.126 | 50.000th=[ 297], 99.000th=[ 515], 99.900th=[ 578], 99.990th=[ 652], 00:08:43.126 | 99.999th=[ 775] 00:08:43.126 bw ( KiB/s): min=428889, max=474624, per=100.00%, avg=459859.37, stdev=1214.43, samples=266 00:08:43.126 iops : min=107222, max=118656, avg=114964.79, stdev=303.61, samples=266 00:08:43.126 trim: IOPS=115k, BW=449MiB/s (470MB/s)(4486MiB/10002msec); 0 zone resets 00:08:43.126 slat (usec): min=5, max=166, avg=28.84, stdev= 7.16 00:08:43.126 clat (usec): min=12, max=3975, avg=348.56, stdev=102.14 00:08:43.126 lat (usec): min=43, max=4011, avg=377.39, stdev=104.63 00:08:43.126 clat percentiles (usec): 00:08:43.126 | 50.000th=[ 343], 99.000th=[ 562], 99.900th=[ 635], 99.990th=[ 717], 00:08:43.126 | 99.999th=[ 857] 00:08:43.126 bw ( KiB/s): min=428889, max=474624, per=100.00%, avg=459859.37, stdev=1214.43, samples=266 00:08:43.126 iops : min=107222, max=118656, avg=114964.79, stdev=303.60, samples=266 00:08:43.126 lat (usec) : 20=0.01%, 50=0.01%, 100=0.31%, 250=25.83%, 500=69.14% 00:08:43.126 lat (usec) : 750=4.70%, 1000=0.01% 00:08:43.126 lat (msec) : 2=0.01%, 4=0.01% 00:08:43.126 cpu : usr=99.62%, sys=0.00%, ctx=643, majf=0, minf=875 00:08:43.126 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.126 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.126 issued rwts: total=0,1148445,1148449,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:43.126 00:08:43.126 Run status group 0 (all jobs): 00:08:43.126 WRITE: bw=449MiB/s (470MB/s), 449MiB/s-449MiB/s (470MB/s-470MB/s), io=4486MiB (4704MB), run=10002-10002msec 00:08:43.126 TRIM: bw=449MiB/s (470MB/s), 449MiB/s-449MiB/s 
(470MB/s-470MB/s), io=4486MiB (4704MB), run=10002-10002msec 00:08:43.126 00:08:43.126 real 0m11.782s 00:08:43.126 user 2m28.999s 00:08:43.126 sys 0m0.751s 00:08:43.126 18:23:27 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.126 18:23:27 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:08:43.126 ************************************ 00:08:43.126 END TEST bdev_fio_trim 00:08:43.126 ************************************ 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:08:43.126 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:08:43.126 00:08:43.126 real 0m24.243s 00:08:43.126 user 5m18.249s 00:08:43.126 sys 0m2.090s 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.126 18:23:27 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:43.126 ************************************ 00:08:43.126 END TEST bdev_fio 00:08:43.126 ************************************ 00:08:43.126 18:23:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:43.126 18:23:27 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.126 18:23:27 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.127 18:23:27 
blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:43.127 18:23:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.127 18:23:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:43.127 ************************************ 00:08:43.127 START TEST bdev_verify 00:08:43.127 ************************************ 00:08:43.127 18:23:27 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.127 [2024-07-15 18:23:27.958924] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:43.127 [2024-07-15 18:23:27.959043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742196 ] 00:08:43.127 [2024-07-15 18:23:28.099077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.127 [2024-07-15 18:23:28.198575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.127 [2024-07-15 18:23:28.198579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.127 [2024-07-15 18:23:28.346699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:43.127 [2024-07-15 18:23:28.346755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:43.127 [2024-07-15 18:23:28.346774] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:43.127 [2024-07-15 18:23:28.354710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:43.127 [2024-07-15 18:23:28.354742] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:43.127 [2024-07-15 18:23:28.362725] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:43.127 [2024-07-15 18:23:28.362750] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:43.127 [2024-07-15 18:23:28.435354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:43.127 [2024-07-15 18:23:28.435403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.127 [2024-07-15 18:23:28.435417] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7dc00 00:08:43.127 [2024-07-15 18:23:28.435427] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.127 [2024-07-15 18:23:28.436975] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.127 [2024-07-15 18:23:28.437004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:43.127 Running I/O for 5 seconds... 
00:08:49.695 00:08:49.695 Latency(us) 00:08:49.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.695 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.695 Verification LBA range: start 0x0 length 0x1000 00:08:49.695 Malloc0 : 5.19 1086.15 4.24 0.00 0.00 117608.72 573.44 381481.94 00:08:49.695 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.695 Verification LBA range: start 0x1000 length 0x1000 00:08:49.695 Malloc0 : 5.21 810.91 3.17 0.00 0.00 157468.21 663.16 481346.32 00:08:49.695 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.695 Verification LBA range: start 0x0 length 0x800 00:08:49.695 Malloc1p0 : 5.19 567.51 2.22 0.00 0.00 224515.21 2683.86 194735.54 00:08:49.695 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.695 Verification LBA range: start 0x800 length 0x800 00:08:49.696 Malloc1p0 : 5.27 437.34 1.71 0.00 0.00 291037.72 3510.86 267636.54 00:08:49.696 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x800 00:08:49.696 Malloc1p1 : 5.19 567.26 2.22 0.00 0.00 224063.23 2683.86 194735.54 00:08:49.696 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x800 length 0x800 00:08:49.696 Malloc1p1 : 5.27 437.11 1.71 0.00 0.00 290369.54 4369.07 265639.25 00:08:49.696 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p0 : 5.19 567.02 2.21 0.00 0.00 223646.31 3682.50 190740.97 00:08:49.696 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p0 : 5.27 436.88 1.71 0.00 0.00 289433.19 3994.58 261644.68 00:08:49.696 Job: Malloc2p1 (Core Mask 0x1, 
workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p1 : 5.19 566.77 2.21 0.00 0.00 223110.82 3261.20 185747.75 00:08:49.696 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p1 : 5.28 436.66 1.71 0.00 0.00 288610.35 3542.06 259647.39 00:08:49.696 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p2 : 5.20 566.52 2.21 0.00 0.00 222628.26 2699.46 183750.46 00:08:49.696 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p2 : 5.28 436.43 1.70 0.00 0.00 287924.82 4306.65 257650.10 00:08:49.696 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p3 : 5.20 566.28 2.21 0.00 0.00 222189.87 2683.86 186746.39 00:08:49.696 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p3 : 5.28 436.20 1.70 0.00 0.00 287029.98 4088.20 255652.82 00:08:49.696 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p4 : 5.20 566.03 2.21 0.00 0.00 221764.16 3620.08 184749.10 00:08:49.696 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p4 : 5.28 435.97 1.70 0.00 0.00 286184.72 3542.06 254654.17 00:08:49.696 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p5 : 5.20 565.79 2.21 0.00 0.00 
221245.74 3339.22 181753.17 00:08:49.696 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p5 : 5.29 435.73 1.70 0.00 0.00 285513.10 3198.78 247663.66 00:08:49.696 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p6 : 5.21 565.55 2.21 0.00 0.00 220751.12 2699.46 180754.53 00:08:49.696 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p6 : 5.29 435.50 1.70 0.00 0.00 284797.09 4244.24 238675.87 00:08:49.696 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x200 00:08:49.696 Malloc2p7 : 5.21 565.26 2.21 0.00 0.00 220322.22 2683.86 182751.82 00:08:49.696 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x200 length 0x200 00:08:49.696 Malloc2p7 : 5.29 435.27 1.70 0.00 0.00 283920.44 4181.82 229688.08 00:08:49.696 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x1000 00:08:49.696 TestPT : 5.23 562.69 2.20 0.00 0.00 220644.35 11921.31 181753.17 00:08:49.696 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x1000 length 0x1000 00:08:49.696 TestPT : 5.31 433.56 1.69 0.00 0.00 283995.29 16103.13 230686.72 00:08:49.696 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x2000 00:08:49.696 raid0 : 5.21 564.64 2.21 0.00 0.00 219410.08 3729.31 171766.74 00:08:49.696 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x2000 length 
0x2000 00:08:49.696 raid0 : 5.30 434.75 1.70 0.00 0.00 282246.62 3822.93 209715.20 00:08:49.696 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x2000 00:08:49.696 concat0 : 5.22 564.37 2.20 0.00 0.00 218875.85 3495.25 167772.16 00:08:49.696 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x2000 length 0x2000 00:08:49.696 concat0 : 5.30 434.52 1.70 0.00 0.00 281382.11 3791.73 213709.78 00:08:49.696 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x1000 00:08:49.696 raid1 : 5.24 586.17 2.29 0.00 0.00 210145.42 3105.16 173764.02 00:08:49.696 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x1000 length 0x1000 00:08:49.696 raid1 : 5.31 434.28 1.70 0.00 0.00 280452.63 4618.73 219701.64 00:08:49.696 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x0 length 0x4e2 00:08:49.696 AIO0 : 5.24 585.97 2.29 0.00 0.00 209711.50 1224.90 183750.46 00:08:49.696 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.696 Verification LBA range: start 0x4e2 length 0x4e2 00:08:49.696 AIO0 : 5.32 457.27 1.79 0.00 0.00 265283.26 1279.51 224694.86 00:08:49.696 =================================================================================================================== 00:08:49.696 Total : 16982.35 66.34 0.00 0.00 235759.98 573.44 481346.32 00:08:49.696 00:08:49.696 real 0m6.538s 00:08:49.696 user 0m12.095s 00:08:49.696 sys 0m0.365s 00:08:49.696 18:23:34 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.696 18:23:34 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:49.696 ************************************ 00:08:49.696 END TEST 
bdev_verify 00:08:49.696 ************************************ 00:08:49.696 18:23:34 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:49.696 18:23:34 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:49.696 18:23:34 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:49.696 18:23:34 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.696 18:23:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:49.696 ************************************ 00:08:49.696 START TEST bdev_verify_big_io 00:08:49.696 ************************************ 00:08:49.696 18:23:34 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:49.696 [2024-07-15 18:23:34.533513] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:08:49.696 [2024-07-15 18:23:34.533569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743288 ] 00:08:49.696 [2024-07-15 18:23:34.633763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.696 [2024-07-15 18:23:34.727151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.696 [2024-07-15 18:23:34.727156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.696 [2024-07-15 18:23:34.875572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:49.696 [2024-07-15 18:23:34.875624] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:49.696 [2024-07-15 18:23:34.875637] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:49.696 [2024-07-15 18:23:34.883582] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:49.696 [2024-07-15 18:23:34.883611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:49.696 [2024-07-15 18:23:34.891595] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:49.696 [2024-07-15 18:23:34.891621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:49.696 [2024-07-15 18:23:34.963418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:49.696 [2024-07-15 18:23:34.963467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.696 [2024-07-15 18:23:34.963487] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ffc00 00:08:49.696 [2024-07-15 18:23:34.963497] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:49.696 [2024-07-15 18:23:34.965043] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.696 [2024-07-15 18:23:34.965069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:49.697 [2024-07-15 18:23:35.142656] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.143933] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.145799] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.147077] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.148962] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.150288] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.151791] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.153261] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.154214] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.155636] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.156571] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.158027] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.158992] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). 
Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.160439] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.161294] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.162570] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:49.697 [2024-07-15 18:23:35.183762] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:49.697 [2024-07-15 18:23:35.185551] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:49.697 Running I/O for 5 seconds... 
00:08:57.858
00:08:57.858 Latency(us)
00:08:57.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:57.858 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x100
00:08:57.858 Malloc0 : 5.79 154.67 9.67 0.00 0.00 809760.55 940.13 2109135.73
00:08:57.858 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x100 length 0x100
00:08:57.858 Malloc0 : 7.01 109.48 6.84 0.00 0.00 964595.03 1248.30 1981309.32
00:08:57.858 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x80
00:08:57.858 Malloc1p0 : 6.77 33.07 2.07 0.00 0.00 3482291.36 1560.38 5752188.34
00:08:57.858 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x80 length 0x80
00:08:57.858 Malloc1p0 : 7.44 27.95 1.75 0.00 0.00 3533069.65 1997.29 5624361.94
00:08:57.858 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x80
00:08:57.858 Malloc1p1 : 6.89 34.85 2.18 0.00 0.00 3204964.27 1560.38 5528492.13
00:08:57.858 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x80 length 0x80
00:08:57.858 Malloc1p1 : 7.44 27.94 1.75 0.00 0.00 3349089.19 1997.29 5336752.52
00:08:57.858 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p0 : 6.35 22.67 1.42 0.00 0.00 1238827.01 667.06 2316853.64
00:08:57.858 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p0 : 7.27 17.60 1.10 0.00 0.00 1308171.96 803.60 2300875.34
00:08:57.858 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p1 : 6.35 22.67 1.42 0.00 0.00 1226115.13 647.56 2284897.04
00:08:57.858 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p1 : 7.28 17.59 1.10 0.00 0.00 1291004.57 799.70 2268918.74
00:08:57.858 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p2 : 6.35 22.67 1.42 0.00 0.00 1213867.11 655.36 2252940.43
00:08:57.858 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p2 : 7.35 19.60 1.22 0.00 0.00 1152350.18 811.40 2236962.13
00:08:57.858 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p3 : 6.35 22.66 1.42 0.00 0.00 1201966.28 655.36 2220983.83
00:08:57.858 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p3 : 7.35 19.59 1.22 0.00 0.00 1137257.12 803.60 2205005.53
00:08:57.858 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p4 : 6.36 22.66 1.42 0.00 0.00 1190973.53 643.66 2205005.53
00:08:57.858 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p4 : 7.45 21.49 1.34 0.00 0.00 1025264.75 811.40 2157070.63
00:08:57.858 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p5 : 6.36 22.65 1.42 0.00 0.00 1178337.67 639.76 2173048.93
00:08:57.858 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p5 : 7.50 23.48 1.47 0.00 0.00 932359.55 834.80 2125114.03
00:08:57.858 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p6 : 6.36 22.65 1.42 0.00 0.00 1166385.32 647.56 2157070.63
00:08:57.858 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p6 : 7.55 25.44 1.59 0.00 0.00 851212.06 823.10 2093157.42
00:08:57.858 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x20
00:08:57.858 Malloc2p7 : 6.45 24.79 1.55 0.00 0.00 1065338.21 631.95 2125114.03
00:08:57.858 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x20 length 0x20
00:08:57.858 Malloc2p7 : 7.55 25.43 1.59 0.00 0.00 839264.21 830.90 2045222.52
00:08:57.858 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x100
00:08:57.858 TestPT : 6.97 32.43 2.03 0.00 0.00 3087648.69 119337.94 3243595.09
00:08:57.858 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x100 length 0x100
00:08:57.858 TestPT : 7.62 84.01 5.25 0.00 0.00 986348.96 7427.41 3579139.41
00:08:57.858 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x200
00:08:57.858 raid0 : 6.78 40.14 2.51 0.00 0.00 2440695.18 1677.41 4729577.08
00:08:57.858 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x200 length 0x200
00:08:57.858 raid0 : 7.16 51.39 3.21 0.00 0.00 2281563.82 3276.80 4314141.26
00:08:57.858 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x200
00:08:57.858 concat0 : 7.01 45.63 2.85 0.00 0.00 2084204.29 1654.00 4537837.47
00:08:57.858 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x200 length 0x200
00:08:57.858 concat0 : 7.54 25.45 1.59 0.00 0.00 4499654.99 2122.12 6678929.80
00:08:57.858 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.858 Verification LBA range: start 0x0 length 0x100
00:08:57.859 raid1 : 6.97 52.78 3.30 0.00 0.00 1756736.32 2137.72 4346097.86
00:08:57.859 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.859 Verification LBA range: start 0x100 length 0x100
00:08:57.859 raid1 : 7.55 25.44 1.59 0.00 0.00 4305005.59 2777.48 6391320.38
00:08:57.859 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:08:57.859 Verification LBA range: start 0x0 length 0x4e
00:08:57.859 AIO0 : 7.08 62.46 3.90 0.00 0.00 883427.68 799.70 3051855.48
00:08:57.859 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:08:57.859 Verification LBA range: start 0x4e length 0x4e
00:08:57.859 AIO0 : 7.27 21.73 1.36 0.00 0.00 3055827.17 1006.45 5336752.52
00:08:57.859 ===================================================================================================================
00:08:57.859 Total : 1183.04 73.94 0.00 0.00 1699089.13 631.95 6678929.80
00:08:57.859
00:08:57.859 real 0m8.814s
00:08:57.859 user 0m16.743s
00:08:57.859 sys 0m0.359s
00:08:57.859 18:23:43 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:57.859 18:23:43 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:57.859 ************************************
00:08:57.859 END TEST bdev_verify_big_io
00:08:57.859 ************************************
00:08:57.859 18:23:43 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:08:57.859 18:23:43 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:57.859 18:23:43 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:57.859 18:23:43 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:57.859 18:23:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:57.859 ************************************
00:08:57.859 START TEST bdev_write_zeroes
00:08:57.859 ************************************
00:08:57.859 18:23:43 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:58.118 [2024-07-15 18:23:43.420237] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:08:58.118 [2024-07-15 18:23:43.420295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744690 ]
00:08:58.118 [2024-07-15 18:23:43.521257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:58.118 [2024-07-15 18:23:43.614997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:58.376 [2024-07-15 18:23:43.762894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:58.376 [2024-07-15 18:23:43.762943] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:58.376 [2024-07-15 18:23:43.762961] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:58.376 [2024-07-15 18:23:43.770899] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:58.376 [2024-07-15 18:23:43.770925] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:58.376 [2024-07-15 18:23:43.778910] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:58.376 [2024-07-15 18:23:43.778933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:58.376 [2024-07-15 18:23:43.850634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:58.376 [2024-07-15 18:23:43.850682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.376 [2024-07-15 18:23:43.850696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17b8cc0
00:08:58.376 [2024-07-15 18:23:43.850712] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.376 [2024-07-15 18:23:43.852213] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.376 [2024-07-15 18:23:43.852241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:58.634 Running I/O for 1 seconds...
00:09:00.011
00:09:00.011 Latency(us)
00:09:00.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:00.011 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc0 : 1.05 4628.88 18.08 0.00 0.00 27630.05 674.86 45687.95
00:09:00.011 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc1p0 : 1.05 4621.72 18.05 0.00 0.00 27626.17 959.63 44938.97
00:09:00.011 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc1p1 : 1.05 4614.64 18.03 0.00 0.00 27602.89 955.73 43940.33
00:09:00.011 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p0 : 1.06 4607.54 18.00 0.00 0.00 27577.88 955.73 42941.68
00:09:00.011 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p1 : 1.06 4600.48 17.97 0.00 0.00 27549.98 951.83 41943.04
00:09:00.011 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p2 : 1.06 4593.44 17.94 0.00 0.00 27526.98 959.63 40944.40
00:09:00.011 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p3 : 1.06 4586.40 17.92 0.00 0.00 27510.32 955.73 40195.41
00:09:00.011 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p4 : 1.06 4579.45 17.89 0.00 0.00 27486.94 955.73 39196.77
00:09:00.011 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p5 : 1.06 4572.52 17.86 0.00 0.00 27462.03 951.83 38198.13
00:09:00.011 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p6 : 1.07 4565.53 17.83 0.00 0.00 27437.79 955.73 37199.48
00:09:00.011 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 Malloc2p7 : 1.07 4558.65 17.81 0.00 0.00 27416.62 955.73 36200.84
00:09:00.011 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 TestPT : 1.07 4551.75 17.78 0.00 0.00 27399.72 990.84 35451.86
00:09:00.011 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 raid0 : 1.07 4543.76 17.75 0.00 0.00 27369.46 1739.82 33704.23
00:09:00.011 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 concat0 : 1.07 4535.99 17.72 0.00 0.00 27302.19 1732.02 31831.77
00:09:00.011 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 raid1 : 1.07 4526.24 17.68 0.00 0.00 27233.32 2746.27 29085.50
00:09:00.011 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:00.011 AIO0 : 1.08 4520.33 17.66 0.00 0.00 27130.61 994.74 28211.69
00:09:00.011 ===================================================================================================================
00:09:00.011 Total : 73207.32 285.97 0.00 0.00 27453.93 674.86 45687.95
00:09:00.011
00:09:00.011 real 0m2.133s
00:09:00.011 user 0m1.782s
00:09:00.011 sys 0m0.289s
00:09:00.011 18:23:45 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:00.011 18:23:45 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:00.011 ************************************
00:09:00.011 END TEST bdev_write_zeroes
00:09:00.011 ************************************
00:09:00.011 18:23:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:09:00.011 18:23:45 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
18:23:45 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
18:23:45 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
18:23:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:00.270 ************************************
00:09:00.270 START TEST bdev_json_nonenclosed
00:09:00.270 ************************************
00:09:00.270 18:23:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:00.270 [2024-07-15 18:23:45.620437] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:09:00.270 [2024-07-15 18:23:45.620488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745058 ]
00:09:00.270 [2024-07-15 18:23:45.719371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.270 [2024-07-15 18:23:45.810632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.270 [2024-07-15 18:23:45.810693] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:09:00.270 [2024-07-15 18:23:45.810710] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:09:00.270 [2024-07-15 18:23:45.810720] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:00.528
00:09:00.528 real 0m0.339s
00:09:00.528 user 0m0.222s
00:09:00.528 sys 0m0.114s
00:09:00.528 18:23:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:09:00.528 18:23:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:00.528 18:23:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:09:00.528 ************************************
00:09:00.528 END TEST bdev_json_nonenclosed
00:09:00.528 ************************************
00:09:00.528 18:23:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 234
00:09:00.528 18:23:45 blockdev_general -- bdev/blockdev.sh@782 -- # true
00:09:00.528 18:23:45 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
18:23:45 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
18:23:45 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
18:23:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:00.528 ************************************
00:09:00.528 START TEST bdev_json_nonarray
00:09:00.528 ************************************
00:09:00.528 18:23:45 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:00.528 [2024-07-15 18:23:46.032989] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:09:00.528 [2024-07-15 18:23:46.033043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745099 ]
00:09:00.787 [2024-07-15 18:23:46.133136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.787 [2024-07-15 18:23:46.225177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.787 [2024-07-15 18:23:46.225250] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:09:00.787 [2024-07-15 18:23:46.225268] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:09:00.787 [2024-07-15 18:23:46.225277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:00.787
00:09:00.787 real 0m0.342s
00:09:00.787 user 0m0.217s
00:09:00.787 sys 0m0.121s
00:09:00.787 18:23:46 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:09:00.787 18:23:46 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:00.787 18:23:46 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:09:00.787 ************************************
00:09:00.787 END TEST bdev_json_nonarray
00:09:00.787 ************************************
00:09:01.047 18:23:46 blockdev_general -- common/autotest_common.sh@1142 -- # return 234
00:09:01.047 18:23:46 blockdev_general -- bdev/blockdev.sh@785 -- # true
00:09:01.047 18:23:46 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]]
00:09:01.047 18:23:46 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite ''
00:09:01.047 18:23:46 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:01.047 18:23:46 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:01.047 18:23:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:01.047 ************************************
00:09:01.047 START TEST bdev_qos
00:09:01.047 ************************************
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite ''
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=2745486
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 2745486'
00:09:01.047 Process qos testing pid: 2745486
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 2745486
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 2745486 ']'
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:01.047 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.047 [2024-07-15 18:23:46.418122] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:09:01.047 [2024-07-15 18:23:46.418181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745486 ]
00:09:01.047 [2024-07-15 18:23:46.521826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.307 [2024-07-15 18:23:46.654044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.567 Malloc_0
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.567 [
00:09:01.567 {
00:09:01.567 "name": "Malloc_0",
00:09:01.567 "aliases": [
00:09:01.567 "e37b90f0-a9ec-4a25-89d0-960c04c97b58"
00:09:01.567 ],
00:09:01.567 "product_name": "Malloc disk",
00:09:01.567 "block_size": 512,
00:09:01.567 "num_blocks": 262144,
00:09:01.567 "uuid": "e37b90f0-a9ec-4a25-89d0-960c04c97b58",
00:09:01.567 "assigned_rate_limits": {
00:09:01.567 "rw_ios_per_sec": 0,
00:09:01.567 "rw_mbytes_per_sec": 0,
00:09:01.567 "r_mbytes_per_sec": 0,
00:09:01.567 "w_mbytes_per_sec": 0
00:09:01.567 },
00:09:01.567 "claimed": false,
00:09:01.567 "zoned": false,
00:09:01.567 "supported_io_types": {
00:09:01.567 "read": true,
00:09:01.567 "write": true,
00:09:01.567 "unmap": true,
00:09:01.567 "flush": true,
00:09:01.567 "reset": true,
00:09:01.567 "nvme_admin": false,
00:09:01.567 "nvme_io": false,
00:09:01.567 "nvme_io_md": false,
00:09:01.567 "write_zeroes": true,
00:09:01.567 "zcopy": true,
00:09:01.567 "get_zone_info": false,
00:09:01.567 "zone_management": false,
00:09:01.567 "zone_append": false,
00:09:01.567 "compare": false,
00:09:01.567 "compare_and_write": false,
00:09:01.567 "abort": true,
00:09:01.567 "seek_hole": false,
00:09:01.567 "seek_data": false,
00:09:01.567 "copy": true,
00:09:01.567 "nvme_iov_md": false
00:09:01.567 },
00:09:01.567 "memory_domains": [
00:09:01.567 {
00:09:01.567 "dma_device_id": "system",
00:09:01.567 "dma_device_type": 1
00:09:01.567 },
00:09:01.567 {
00:09:01.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:01.567 "dma_device_type": 2
00:09:01.567 }
00:09:01.567 ],
00:09:01.567 "driver_specific": {}
00:09:01.567 }
00:09:01.567 ]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.567 Null_1
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:01.567 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:01.567 [
00:09:01.567 {
00:09:01.567 "name": "Null_1",
00:09:01.567 "aliases": [
00:09:01.567 "ff2910fc-67b1-4f37-90da-bc52778f6845"
00:09:01.567 ],
00:09:01.567 "product_name": "Null disk",
00:09:01.567 "block_size": 512,
00:09:01.567 "num_blocks": 262144,
00:09:01.567 "uuid": "ff2910fc-67b1-4f37-90da-bc52778f6845",
00:09:01.568 "assigned_rate_limits": {
00:09:01.568 "rw_ios_per_sec": 0,
00:09:01.568 "rw_mbytes_per_sec": 0,
00:09:01.568 "r_mbytes_per_sec": 0,
00:09:01.568 "w_mbytes_per_sec": 0
00:09:01.568 },
00:09:01.568 "claimed": false,
00:09:01.568 "zoned": false,
00:09:01.568 "supported_io_types": {
00:09:01.568 "read": true,
00:09:01.568 "write": true,
00:09:01.568 "unmap": false,
00:09:01.568 "flush": false,
00:09:01.568 "reset": true,
00:09:01.568 "nvme_admin": false,
00:09:01.568 "nvme_io": false,
00:09:01.568 "nvme_io_md": false,
00:09:01.568 "write_zeroes": true,
00:09:01.568 "zcopy": false,
00:09:01.568 "get_zone_info": false,
00:09:01.568 "zone_management": false,
00:09:01.568 "zone_append": false,
00:09:01.568 "compare": false,
00:09:01.568 "compare_and_write": false,
00:09:01.568 "abort": true,
00:09:01.568 "seek_hole": false,
00:09:01.568 "seek_data": false,
00:09:01.568 "copy": false,
00:09:01.568 "nvme_iov_md": false
00:09:01.568 },
00:09:01.568 "driver_specific": {}
00:09:01.568 }
00:09:01.568 ]
00:09:01.568 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:01.568 18:23:46 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0
00:09:01.568 18:23:46 blockdev_general.bdev_qos --
bdev/blockdev.sh@457 -- # qos_function_test 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:09:01.568 18:23:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:09:01.568 Running I/O for 60 seconds... 
00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 43682.01 174728.06 0.00 0.00 176128.00 0.00 0.00 ' 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=43682.01 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 43682 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=43682 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=10000 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 10000 -gt 1000 ']' 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 10000 Malloc_0 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 10000 IOPS Malloc_0 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.841 18:23:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:06.841 ************************************ 00:09:06.841 START TEST bdev_qos_iops 00:09:06.841 ************************************ 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 10000 IOPS Malloc_0 00:09:06.841 18:23:52 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=10000 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:09:06.841 18:23:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 10003.59 40014.37 0.00 0.00 40800.00 0.00 0.00 ' 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=10003.59 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 10003 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=10003 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=9000 00:09:12.113 18:23:57 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=11000 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 10003 -lt 9000 ']' 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 10003 -gt 11000 ']' 00:09:12.113 00:09:12.113 real 0m5.268s 00:09:12.113 user 0m0.119s 00:09:12.113 sys 0m0.045s 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.113 18:23:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:09:12.113 ************************************ 00:09:12.113 END TEST bdev_qos_iops 00:09:12.113 ************************************ 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:09:12.113 18:23:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 14759.98 59039.93 0.00 0.00 60416.00 0.00 0.00 ' 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:17.386 18:24:02 
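The bdev_qos_iops trace above (qos_limit=10000, lower_limit=9000, upper_limit=11000, measured 10003) can be sketched as the following ±10% tolerance check. Variable names mirror the trace; this is a hedged reconstruction of the traced logic, not the literal blockdev.sh source.

```shell
# Sketch of run_qos_test's IOPS tolerance check, as seen in the xtrace.
qos_limit=10000        # IOPS limit configured via bdev_set_qos_limit
qos_result=10003       # measured IOPS parsed from iostat.py output

lower_limit=$((qos_limit * 9 / 10))    # 9000 in the trace
upper_limit=$((qos_limit * 11 / 10))   # 11000 in the trace

if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
    echo "FAIL: $qos_result outside [$lower_limit, $upper_limit]"
else
    echo "PASS: $qos_result within [$lower_limit, $upper_limit]"
fi
```

With the values from this run, 10003 falls inside [9000, 11000], matching the test's PASS outcome.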
blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=60416.00 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 60416 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=60416 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=5 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 5 -lt 2 ']' 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 5 Null_1 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 5 BANDWIDTH Null_1 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.386 18:24:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:17.386 ************************************ 00:09:17.386 START TEST bdev_qos_bw 00:09:17.386 ************************************ 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 5 BANDWIDTH Null_1 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=5 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:09:17.386 
18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:09:17.386 18:24:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 1279.54 5118.16 0.00 0.00 5232.00 0.00 0.00 ' 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=5232.00 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 5232 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=5232 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=5120 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=4608 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=5632 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@400 -- # '[' 5232 -lt 4608 ']' 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 5232 -gt 5632 ']' 00:09:22.659 00:09:22.659 real 0m5.316s 00:09:22.659 user 0m0.129s 00:09:22.659 sys 0m0.034s 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.659 18:24:08 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:22.659 ************************************ 00:09:22.659 END TEST bdev_qos_bw 00:09:22.659 ************************************ 00:09:22.659 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:22.659 18:24:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:22.659 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.659 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.919 18:24:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:22.919 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:22.919 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.919 18:24:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 ************************************ 00:09:22.919 START TEST bdev_qos_ro_bw 00:09:22.919 ************************************ 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:09:22.919 18:24:08 
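The bdev_qos_bw numbers above (baseline 60416 KB/s, bw_limit=5, bounds 4608/5632) are consistent with deriving the limit as roughly 10% of the unthrottled throughput, converted to MB/s, then re-checking in KB/s with a ±10% window. The arithmetic below is an assumption about the traced script's intent, with values copied from the log.

```shell
# Sketch of the bandwidth-limit derivation and tolerance window.
iostat_kb_per_sec=60416                          # unthrottled Null_1 throughput (KB/s)
bw_limit_mb=$((iostat_kb_per_sec / 1024 / 10))   # ~10% of baseline, in MB/s

qos_limit_kb=$((bw_limit_mb * 1024))             # limit expressed back in KB/s
lower=$((qos_limit_kb * 9 / 10))                 # 4608 in the trace
upper=$((qos_limit_kb * 11 / 10))                # 5632 in the trace
echo "limit ${bw_limit_mb} MB/s, accept [${lower}, ${upper}] KB/s"
```

The measured 5232 KB/s lands inside [4608, 5632], so the bandwidth test passes.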
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:09:22.919 18:24:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.28 2045.12 0.00 0.00 2056.00 0.00 0.00 ' 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2056.00 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2056 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2056 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 
00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2056 -lt 1843 ']' 00:09:28.218 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2056 -gt 2252 ']' 00:09:28.219 00:09:28.219 real 0m5.191s 00:09:28.219 user 0m0.118s 00:09:28.219 sys 0m0.045s 00:09:28.219 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.219 18:24:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:09:28.219 ************************************ 00:09:28.219 END TEST bdev_qos_ro_bw 00:09:28.219 ************************************ 00:09:28.219 18:24:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:28.219 18:24:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:09:28.219 18:24:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.219 18:24:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:28.787 00:09:28.787 Latency(us) 00:09:28.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.787 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:28.787 Malloc_0 : 26.85 14334.14 55.99 0.00 0.00 17689.43 2777.48 503316.48 
00:09:28.787 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:28.787 Null_1 : 27.07 14533.91 56.77 0.00 0.00 17537.71 1131.28 228689.43 00:09:28.787 =================================================================================================================== 00:09:28.787 Total : 28868.05 112.77 0.00 0.00 17612.73 1131.28 503316.48 00:09:28.787 0 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 2745486 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 2745486 ']' 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 2745486 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2745486 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2745486' 00:09:28.787 killing process with pid 2745486 00:09:28.787 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 2745486 00:09:28.787 Received shutdown signal, test time was about 27.128613 seconds 00:09:28.787 00:09:28.787 Latency(us) 00:09:28.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.787 =================================================================================================================== 00:09:28.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.787 
18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 2745486 00:09:29.046 18:24:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:09:29.046 00:09:29.046 real 0m28.146s 00:09:29.046 user 0m28.929s 00:09:29.046 sys 0m0.708s 00:09:29.046 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.046 18:24:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:29.046 ************************************ 00:09:29.046 END TEST bdev_qos 00:09:29.046 ************************************ 00:09:29.046 18:24:14 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:29.046 18:24:14 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:09:29.046 18:24:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.046 18:24:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.046 18:24:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:29.046 ************************************ 00:09:29.046 START TEST bdev_qd_sampling 00:09:29.046 ************************************ 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=2750870 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 2750870' 00:09:29.046 Process bdev QD sampling period testing pid: 2750870 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:29.046 18:24:14 
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 2750870 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 2750870 ']' 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.046 18:24:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:29.305 [2024-07-15 18:24:14.610220] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:09:29.305 [2024-07-15 18:24:14.610285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750870 ] 00:09:29.305 [2024-07-15 18:24:14.708361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.305 [2024-07-15 18:24:14.804364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.305 [2024-07-15 18:24:14.804371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:30.242 Malloc_QD 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:30.242 18:24:15 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:30.242 [ 00:09:30.242 { 00:09:30.242 "name": "Malloc_QD", 00:09:30.242 "aliases": [ 00:09:30.242 "e22529c9-e6b8-4b35-943e-fae20545ae4c" 00:09:30.242 ], 00:09:30.242 "product_name": "Malloc disk", 00:09:30.242 "block_size": 512, 00:09:30.242 "num_blocks": 262144, 00:09:30.242 "uuid": "e22529c9-e6b8-4b35-943e-fae20545ae4c", 00:09:30.242 "assigned_rate_limits": { 00:09:30.242 "rw_ios_per_sec": 0, 00:09:30.242 "rw_mbytes_per_sec": 0, 00:09:30.242 "r_mbytes_per_sec": 0, 00:09:30.242 "w_mbytes_per_sec": 0 00:09:30.242 }, 00:09:30.242 "claimed": false, 00:09:30.242 "zoned": false, 00:09:30.242 "supported_io_types": { 00:09:30.242 "read": true, 00:09:30.242 "write": true, 00:09:30.242 "unmap": true, 00:09:30.242 "flush": true, 00:09:30.242 "reset": true, 00:09:30.242 "nvme_admin": false, 00:09:30.242 "nvme_io": false, 00:09:30.242 "nvme_io_md": false, 00:09:30.242 "write_zeroes": true, 00:09:30.242 "zcopy": true, 00:09:30.242 "get_zone_info": false, 00:09:30.242 "zone_management": false, 00:09:30.242 "zone_append": false, 00:09:30.242 "compare": false, 00:09:30.242 "compare_and_write": false, 00:09:30.242 "abort": true, 00:09:30.242 "seek_hole": false, 00:09:30.242 "seek_data": false, 00:09:30.242 "copy": true, 
00:09:30.242 "nvme_iov_md": false 00:09:30.242 }, 00:09:30.242 "memory_domains": [ 00:09:30.242 { 00:09:30.242 "dma_device_id": "system", 00:09:30.242 "dma_device_type": 1 00:09:30.242 }, 00:09:30.242 { 00:09:30.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.242 "dma_device_type": 2 00:09:30.242 } 00:09:30.242 ], 00:09:30.242 "driver_specific": {} 00:09:30.242 } 00:09:30.242 ] 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:09:30.242 18:24:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.242 Running I/O for 5 seconds... 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:09:32.144 "tick_rate": 2100000000, 00:09:32.144 "ticks": 10180560957969390, 00:09:32.144 "bdevs": [ 00:09:32.144 { 00:09:32.144 "name": "Malloc_QD", 00:09:32.144 "bytes_read": 663794176, 00:09:32.144 "num_read_ops": 162052, 00:09:32.144 "bytes_written": 0, 00:09:32.144 "num_write_ops": 0, 00:09:32.144 "bytes_unmapped": 0, 00:09:32.144 "num_unmap_ops": 0, 00:09:32.144 "bytes_copied": 0, 00:09:32.144 "num_copy_ops": 0, 00:09:32.144 "read_latency_ticks": 2046760936588, 00:09:32.144 "max_read_latency_ticks": 16711304, 00:09:32.144 "min_read_latency_ticks": 223208, 00:09:32.144 "write_latency_ticks": 0, 00:09:32.144 "max_write_latency_ticks": 0, 00:09:32.144 "min_write_latency_ticks": 0, 00:09:32.144 "unmap_latency_ticks": 0, 00:09:32.144 "max_unmap_latency_ticks": 0, 00:09:32.144 "min_unmap_latency_ticks": 0, 00:09:32.144 "copy_latency_ticks": 0, 00:09:32.144 "max_copy_latency_ticks": 0, 00:09:32.144 "min_copy_latency_ticks": 0, 00:09:32.144 "io_error": {}, 00:09:32.144 "queue_depth_polling_period": 10, 00:09:32.144 "queue_depth": 512, 00:09:32.144 "io_time": 30, 00:09:32.144 "weighted_io_time": 15360 00:09:32.144 } 00:09:32.144 ] 00:09:32.144 }' 00:09:32.144 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:09:32.403 18:24:17 
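The qd_sampling check above extracts `queue_depth_polling_period` from the bdev_get_iostat JSON with jq and compares it to the configured value of 10. A self-contained sketch (JSON trimmed to the relevant field, sed substituted for jq so the snippet has no external dependency):

```shell
# Sketch: pull the sampling period out of the bdev_get_iostat reply.
iostats='{ "bdevs": [ { "name": "Malloc_QD", "queue_depth_polling_period": 10 } ] }'
qd_sampling_period=$(echo "$iostats" | sed -n 's/.*"queue_depth_polling_period": *\([0-9]*\).*/\1/p')
echo "sampling period: $qd_sampling_period"
```

The test then fails only if the reported period is null or differs from the value set via bdev_set_qd_sampling_period.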
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:32.403 00:09:32.403 Latency(us) 00:09:32.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.403 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:32.403 Malloc_QD : 1.98 47691.36 186.29 0.00 0.00 5354.17 1513.57 5679.79 00:09:32.403 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:32.403 Malloc_QD : 1.98 37320.08 145.78 0.00 0.00 6840.52 1334.13 7957.94 00:09:32.403 =================================================================================================================== 00:09:32.403 Total : 85011.44 332.08 0.00 0.00 6006.98 1334.13 7957.94 00:09:32.403 0 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 2750870 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 2750870 ']' 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 2750870 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750870 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750870' 00:09:32.403 killing process with pid 2750870 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 2750870 00:09:32.403 Received shutdown signal, test time was about 2.055485 seconds 00:09:32.403 00:09:32.403 Latency(us) 00:09:32.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.403 =================================================================================================================== 00:09:32.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.403 18:24:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 2750870 00:09:32.662 18:24:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:09:32.663 00:09:32.663 real 0m3.454s 00:09:32.663 user 0m6.929s 00:09:32.663 sys 0m0.346s 00:09:32.663 18:24:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.663 18:24:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:32.663 ************************************ 00:09:32.663 END TEST bdev_qd_sampling 00:09:32.663 ************************************ 00:09:32.663 18:24:18 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:32.663 18:24:18 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:09:32.663 18:24:18 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:32.663 18:24:18 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.663 18:24:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:32.663 ************************************ 00:09:32.663 START TEST bdev_error 00:09:32.663 ************************************ 00:09:32.663 18:24:18 blockdev_general.bdev_error -- 
common/autotest_common.sh@1123 -- # error_test_suite '' 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=2751501 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 2751501' 00:09:32.663 Process error testing pid: 2751501 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:32.663 18:24:18 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 2751501 00:09:32.663 18:24:18 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2751501 ']' 00:09:32.663 18:24:18 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.663 18:24:18 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.663 18:24:18 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.663 18:24:18 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.663 18:24:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:32.663 [2024-07-15 18:24:18.103561] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:09:32.663 [2024-07-15 18:24:18.103623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751501 ] 00:09:32.663 [2024-07-15 18:24:18.206756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.922 [2024-07-15 18:24:18.319662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:33.858 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:33.858 Dev_1 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.858 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.858 
18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:33.858 [ 00:09:33.858 { 00:09:33.858 "name": "Dev_1", 00:09:33.858 "aliases": [ 00:09:33.858 "7a0cbbec-3293-43c0-ae89-a0e6fdbabb63" 00:09:33.858 ], 00:09:33.858 "product_name": "Malloc disk", 00:09:33.858 "block_size": 512, 00:09:33.858 "num_blocks": 262144, 00:09:33.858 "uuid": "7a0cbbec-3293-43c0-ae89-a0e6fdbabb63", 00:09:33.858 "assigned_rate_limits": { 00:09:33.858 "rw_ios_per_sec": 0, 00:09:33.858 "rw_mbytes_per_sec": 0, 00:09:33.858 "r_mbytes_per_sec": 0, 00:09:33.858 "w_mbytes_per_sec": 0 00:09:33.858 }, 00:09:33.858 "claimed": false, 00:09:33.858 "zoned": false, 00:09:33.858 "supported_io_types": { 00:09:33.858 "read": true, 00:09:33.858 "write": true, 00:09:33.858 "unmap": true, 00:09:33.858 "flush": true, 00:09:33.858 "reset": true, 00:09:33.858 "nvme_admin": false, 00:09:33.858 "nvme_io": false, 00:09:33.858 "nvme_io_md": false, 00:09:33.858 "write_zeroes": true, 00:09:33.858 "zcopy": true, 00:09:33.858 "get_zone_info": false, 00:09:33.858 "zone_management": false, 00:09:33.858 "zone_append": false, 00:09:33.858 "compare": false, 00:09:33.858 "compare_and_write": false, 00:09:33.858 "abort": true, 00:09:33.858 "seek_hole": false, 00:09:33.858 "seek_data": false, 00:09:33.858 "copy": true, 00:09:33.858 "nvme_iov_md": false 00:09:33.858 }, 00:09:33.858 "memory_domains": [ 00:09:33.858 { 00:09:33.858 "dma_device_id": "system", 00:09:33.858 "dma_device_type": 1 00:09:33.858 }, 00:09:33.858 { 00:09:33.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:33.858 "dma_device_type": 2 00:09:33.858 } 00:09:33.858 ], 00:09:33.858 "driver_specific": {} 00:09:33.858 } 00:09:33.858 ] 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:33.858 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:33.858 true 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.858 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.858 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.117 Dev_2 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.117 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:34.117 18:24:19 blockdev_general.bdev_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:34.117 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.118 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.118 [ 00:09:34.118 { 00:09:34.118 "name": "Dev_2", 00:09:34.118 "aliases": [ 00:09:34.118 "61bd4f11-e474-4297-b405-3dd0d4148008" 00:09:34.118 ], 00:09:34.118 "product_name": "Malloc disk", 00:09:34.118 "block_size": 512, 00:09:34.118 "num_blocks": 262144, 00:09:34.118 "uuid": "61bd4f11-e474-4297-b405-3dd0d4148008", 00:09:34.118 "assigned_rate_limits": { 00:09:34.118 "rw_ios_per_sec": 0, 00:09:34.118 "rw_mbytes_per_sec": 0, 00:09:34.118 "r_mbytes_per_sec": 0, 00:09:34.118 "w_mbytes_per_sec": 0 00:09:34.118 }, 00:09:34.118 "claimed": false, 00:09:34.118 "zoned": false, 00:09:34.118 "supported_io_types": { 00:09:34.118 "read": true, 00:09:34.118 "write": true, 00:09:34.118 "unmap": true, 00:09:34.118 "flush": true, 00:09:34.118 "reset": true, 00:09:34.118 "nvme_admin": false, 00:09:34.118 "nvme_io": false, 00:09:34.118 "nvme_io_md": false, 00:09:34.118 "write_zeroes": true, 00:09:34.118 "zcopy": true, 00:09:34.118 "get_zone_info": false, 00:09:34.118 "zone_management": false, 00:09:34.118 "zone_append": false, 00:09:34.118 "compare": false, 00:09:34.118 "compare_and_write": false, 00:09:34.118 "abort": true, 00:09:34.118 "seek_hole": false, 00:09:34.118 "seek_data": false, 00:09:34.118 "copy": true, 00:09:34.118 "nvme_iov_md": false 00:09:34.118 }, 00:09:34.118 "memory_domains": [ 00:09:34.118 { 00:09:34.118 "dma_device_id": "system", 00:09:34.118 "dma_device_type": 1 00:09:34.118 }, 00:09:34.118 { 
00:09:34.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.118 "dma_device_type": 2 00:09:34.118 } 00:09:34.118 ], 00:09:34.118 "driver_specific": {} 00:09:34.118 } 00:09:34.118 ] 00:09:34.118 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.118 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:34.118 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:34.118 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.118 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.118 18:24:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.118 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:09:34.118 18:24:19 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:34.376 Running I/O for 5 seconds... 00:09:34.943 18:24:20 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 2751501 00:09:34.943 18:24:20 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 2751501' 00:09:34.943 Process is existed as continue on error is set. 
Pid: 2751501 00:09:34.943 18:24:20 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:34.943 18:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.943 18:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.943 18:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.943 18:24:20 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:34.943 18:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.943 18:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:35.202 18:24:20 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.202 18:24:20 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:09:35.202 Timeout while waiting for response: 00:09:35.202 00:09:35.202 00:09:39.394 00:09:39.394 Latency(us) 00:09:39.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.394 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:39.394 EE_Dev_1 : 0.79 25992.41 101.53 6.35 0.00 610.32 197.00 1365.33 00:09:39.394 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:39.394 Dev_2 : 5.00 57155.55 223.26 0.00 0.00 274.92 93.14 20721.86 00:09:39.394 =================================================================================================================== 00:09:39.394 Total : 83147.96 324.80 6.35 0.00 297.34 93.14 20721.86 00:09:40.331 18:24:25 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 2751501 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 2751501 ']' 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 2751501 00:09:40.331 18:24:25 
blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2751501 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2751501' 00:09:40.331 killing process with pid 2751501 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 2751501 00:09:40.331 Received shutdown signal, test time was about 5.000000 seconds 00:09:40.331 00:09:40.331 Latency(us) 00:09:40.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.331 =================================================================================================================== 00:09:40.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.331 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 2751501 00:09:40.590 18:24:25 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=2752612 00:09:40.590 18:24:25 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 2752612' 00:09:40.590 Process error testing pid: 2752612 00:09:40.590 18:24:25 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:40.590 18:24:25 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 2752612 00:09:40.590 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2752612 ']' 00:09:40.590 18:24:25 
blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.590 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.590 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.590 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.590 18:24:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:40.590 [2024-07-15 18:24:26.012230] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:40.590 [2024-07-15 18:24:26.012344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752612 ] 00:09:40.848 [2024-07-15 18:24:26.155790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.848 [2024-07-15 18:24:26.272543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.416 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.416 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:41.416 18:24:26 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:41.416 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.416 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.674 Dev_1 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.674 18:24:26 
blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.674 18:24:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.674 [ 00:09:41.674 { 00:09:41.674 "name": "Dev_1", 00:09:41.674 "aliases": [ 00:09:41.674 "d9ab7a9a-b241-489a-9070-22a3982d59f4" 00:09:41.674 ], 00:09:41.674 "product_name": "Malloc disk", 00:09:41.674 "block_size": 512, 00:09:41.674 "num_blocks": 262144, 00:09:41.674 "uuid": "d9ab7a9a-b241-489a-9070-22a3982d59f4", 00:09:41.674 "assigned_rate_limits": { 00:09:41.674 "rw_ios_per_sec": 0, 00:09:41.674 "rw_mbytes_per_sec": 0, 00:09:41.674 "r_mbytes_per_sec": 0, 00:09:41.674 "w_mbytes_per_sec": 0 00:09:41.674 }, 00:09:41.674 "claimed": false, 00:09:41.674 "zoned": false, 00:09:41.674 "supported_io_types": { 00:09:41.674 "read": true, 00:09:41.674 
"write": true, 00:09:41.674 "unmap": true, 00:09:41.674 "flush": true, 00:09:41.674 "reset": true, 00:09:41.674 "nvme_admin": false, 00:09:41.674 "nvme_io": false, 00:09:41.674 "nvme_io_md": false, 00:09:41.674 "write_zeroes": true, 00:09:41.674 "zcopy": true, 00:09:41.674 "get_zone_info": false, 00:09:41.674 "zone_management": false, 00:09:41.674 "zone_append": false, 00:09:41.674 "compare": false, 00:09:41.674 "compare_and_write": false, 00:09:41.674 "abort": true, 00:09:41.674 "seek_hole": false, 00:09:41.674 "seek_data": false, 00:09:41.674 "copy": true, 00:09:41.674 "nvme_iov_md": false 00:09:41.674 }, 00:09:41.674 "memory_domains": [ 00:09:41.674 { 00:09:41.674 "dma_device_id": "system", 00:09:41.675 "dma_device_type": 1 00:09:41.675 }, 00:09:41.675 { 00:09:41.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.675 "dma_device_type": 2 00:09:41.675 } 00:09:41.675 ], 00:09:41.675 "driver_specific": {} 00:09:41.675 } 00:09:41.675 ] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:41.675 18:24:27 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.675 true 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.675 Dev_2 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.675 [ 00:09:41.675 { 00:09:41.675 "name": "Dev_2", 00:09:41.675 "aliases": [ 00:09:41.675 "66e04ee9-3431-494e-8149-02547f942b30" 00:09:41.675 ], 00:09:41.675 "product_name": "Malloc disk", 00:09:41.675 "block_size": 512, 00:09:41.675 "num_blocks": 262144, 00:09:41.675 "uuid": "66e04ee9-3431-494e-8149-02547f942b30", 00:09:41.675 "assigned_rate_limits": { 00:09:41.675 "rw_ios_per_sec": 0, 00:09:41.675 "rw_mbytes_per_sec": 0, 00:09:41.675 "r_mbytes_per_sec": 0, 00:09:41.675 "w_mbytes_per_sec": 0 00:09:41.675 }, 00:09:41.675 "claimed": false, 00:09:41.675 "zoned": false, 00:09:41.675 "supported_io_types": { 
00:09:41.675 "read": true, 00:09:41.675 "write": true, 00:09:41.675 "unmap": true, 00:09:41.675 "flush": true, 00:09:41.675 "reset": true, 00:09:41.675 "nvme_admin": false, 00:09:41.675 "nvme_io": false, 00:09:41.675 "nvme_io_md": false, 00:09:41.675 "write_zeroes": true, 00:09:41.675 "zcopy": true, 00:09:41.675 "get_zone_info": false, 00:09:41.675 "zone_management": false, 00:09:41.675 "zone_append": false, 00:09:41.675 "compare": false, 00:09:41.675 "compare_and_write": false, 00:09:41.675 "abort": true, 00:09:41.675 "seek_hole": false, 00:09:41.675 "seek_data": false, 00:09:41.675 "copy": true, 00:09:41.675 "nvme_iov_md": false 00:09:41.675 }, 00:09:41.675 "memory_domains": [ 00:09:41.675 { 00:09:41.675 "dma_device_id": "system", 00:09:41.675 "dma_device_type": 1 00:09:41.675 }, 00:09:41.675 { 00:09:41.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.675 "dma_device_type": 2 00:09:41.675 } 00:09:41.675 ], 00:09:41.675 "driver_specific": {} 00:09:41.675 } 00:09:41.675 ] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:41.675 18:24:27 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.675 18:24:27 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 2752612 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:09:41.675 18:24:27 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 
perform_tests 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2752612 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.675 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 2752612 00:09:41.934 Running I/O for 5 seconds... 00:09:41.934 task offset: 5000 on job bdev=EE_Dev_1 fails 00:09:41.934 00:09:41.934 Latency(us) 00:09:41.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.934 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:41.934 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:41.934 EE_Dev_1 : 0.00 21400.78 83.60 4863.81 0.00 509.60 179.44 901.12 00:09:41.934 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:41.934 Dev_2 : 0.00 12898.02 50.38 0.00 0.00 934.08 174.57 1739.82 00:09:41.934 =================================================================================================================== 00:09:41.934 Total : 34298.80 133.98 4863.81 0.00 739.83 174.57 1739.82 00:09:41.934 [2024-07-15 18:24:27.236766] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.934 request: 00:09:41.934 { 00:09:41.934 "method": "perform_tests", 00:09:41.934 "req_id": 1 00:09:41.934 } 00:09:41.934 Got JSON-RPC error response 00:09:41.934 response: 00:09:41.934 { 00:09:41.934 "code": -32603, 00:09:41.934 "message": "bdevperf failed with error Operation not permitted" 00:09:41.934 } 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # 
es=255 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:42.193 00:09:42.193 real 0m9.552s 00:09:42.193 user 0m10.356s 00:09:42.193 sys 0m0.869s 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.193 18:24:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:42.193 ************************************ 00:09:42.193 END TEST bdev_error 00:09:42.193 ************************************ 00:09:42.193 18:24:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:42.193 18:24:27 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:09:42.193 18:24:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:42.193 18:24:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.193 18:24:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:42.193 ************************************ 00:09:42.193 START TEST bdev_stat 00:09:42.193 ************************************ 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=2752970 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 2752970' 00:09:42.193 Process Bdev IO statistics 
testing pid: 2752970 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 2752970 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 2752970 ']' 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.193 18:24:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:42.193 [2024-07-15 18:24:27.704028] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:09:42.193 [2024-07-15 18:24:27.704094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752970 ] 00:09:42.452 [2024-07-15 18:24:27.806350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.452 [2024-07-15 18:24:27.901895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.452 [2024-07-15 18:24:27.901901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:43.387 Malloc_STAT 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 
00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.387 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:43.387 [ 00:09:43.387 { 00:09:43.387 "name": "Malloc_STAT", 00:09:43.387 "aliases": [ 00:09:43.387 "ddbb3efe-e48d-4f80-a099-63e3dfc3ffdf" 00:09:43.387 ], 00:09:43.387 "product_name": "Malloc disk", 00:09:43.387 "block_size": 512, 00:09:43.388 "num_blocks": 262144, 00:09:43.388 "uuid": "ddbb3efe-e48d-4f80-a099-63e3dfc3ffdf", 00:09:43.388 "assigned_rate_limits": { 00:09:43.388 "rw_ios_per_sec": 0, 00:09:43.388 "rw_mbytes_per_sec": 0, 00:09:43.388 "r_mbytes_per_sec": 0, 00:09:43.388 "w_mbytes_per_sec": 0 00:09:43.388 }, 00:09:43.388 "claimed": false, 00:09:43.388 "zoned": false, 00:09:43.388 "supported_io_types": { 00:09:43.388 "read": true, 00:09:43.388 "write": true, 00:09:43.388 "unmap": true, 00:09:43.388 "flush": true, 00:09:43.388 "reset": true, 00:09:43.388 "nvme_admin": false, 00:09:43.388 "nvme_io": false, 00:09:43.388 "nvme_io_md": false, 00:09:43.388 "write_zeroes": true, 00:09:43.388 "zcopy": true, 00:09:43.388 "get_zone_info": false, 00:09:43.388 "zone_management": false, 00:09:43.388 "zone_append": false, 00:09:43.388 "compare": false, 00:09:43.388 "compare_and_write": false, 00:09:43.388 "abort": true, 00:09:43.388 "seek_hole": false, 00:09:43.388 "seek_data": false, 00:09:43.388 "copy": true, 00:09:43.388 "nvme_iov_md": false 00:09:43.388 }, 00:09:43.388 "memory_domains": [ 00:09:43.388 { 00:09:43.388 "dma_device_id": "system", 
00:09:43.388 "dma_device_type": 1 00:09:43.388 }, 00:09:43.388 { 00:09:43.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.388 "dma_device_type": 2 00:09:43.388 } 00:09:43.388 ], 00:09:43.388 "driver_specific": {} 00:09:43.388 } 00:09:43.388 ] 00:09:43.388 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.388 18:24:28 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:09:43.388 18:24:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:09:43.388 18:24:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:43.388 Running I/O for 10 seconds... 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:45.306 
18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:09:45.306 "tick_rate": 2100000000, 00:09:45.306 "ticks": 10180588388537792, 00:09:45.306 "bdevs": [ 00:09:45.306 { 00:09:45.306 "name": "Malloc_STAT", 00:09:45.306 "bytes_read": 660648448, 00:09:45.306 "num_read_ops": 161284, 00:09:45.306 "bytes_written": 0, 00:09:45.306 "num_write_ops": 0, 00:09:45.306 "bytes_unmapped": 0, 00:09:45.306 "num_unmap_ops": 0, 00:09:45.306 "bytes_copied": 0, 00:09:45.306 "num_copy_ops": 0, 00:09:45.306 "read_latency_ticks": 2032818558146, 00:09:45.306 "max_read_latency_ticks": 17017800, 00:09:45.306 "min_read_latency_ticks": 235628, 00:09:45.306 "write_latency_ticks": 0, 00:09:45.306 "max_write_latency_ticks": 0, 00:09:45.306 "min_write_latency_ticks": 0, 00:09:45.306 "unmap_latency_ticks": 0, 00:09:45.306 "max_unmap_latency_ticks": 0, 00:09:45.306 "min_unmap_latency_ticks": 0, 00:09:45.306 "copy_latency_ticks": 0, 00:09:45.306 "max_copy_latency_ticks": 0, 00:09:45.306 "min_copy_latency_ticks": 0, 00:09:45.306 "io_error": {} 00:09:45.306 } 00:09:45.306 ] 00:09:45.306 }' 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=161284 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:09:45.306 "tick_rate": 2100000000, 00:09:45.306 "ticks": 10180588534442820, 
00:09:45.306 "name": "Malloc_STAT", 00:09:45.306 "channels": [ 00:09:45.306 { 00:09:45.306 "thread_id": 2, 00:09:45.306 "bytes_read": 385875968, 00:09:45.306 "num_read_ops": 94208, 00:09:45.306 "bytes_written": 0, 00:09:45.306 "num_write_ops": 0, 00:09:45.306 "bytes_unmapped": 0, 00:09:45.306 "num_unmap_ops": 0, 00:09:45.306 "bytes_copied": 0, 00:09:45.306 "num_copy_ops": 0, 00:09:45.306 "read_latency_ticks": 1052876297480, 00:09:45.306 "max_read_latency_ticks": 12026530, 00:09:45.306 "min_read_latency_ticks": 8052680, 00:09:45.306 "write_latency_ticks": 0, 00:09:45.306 "max_write_latency_ticks": 0, 00:09:45.306 "min_write_latency_ticks": 0, 00:09:45.306 "unmap_latency_ticks": 0, 00:09:45.306 "max_unmap_latency_ticks": 0, 00:09:45.306 "min_unmap_latency_ticks": 0, 00:09:45.306 "copy_latency_ticks": 0, 00:09:45.306 "max_copy_latency_ticks": 0, 00:09:45.306 "min_copy_latency_ticks": 0 00:09:45.306 }, 00:09:45.306 { 00:09:45.306 "thread_id": 3, 00:09:45.306 "bytes_read": 298844160, 00:09:45.306 "num_read_ops": 72960, 00:09:45.306 "bytes_written": 0, 00:09:45.306 "num_write_ops": 0, 00:09:45.306 "bytes_unmapped": 0, 00:09:45.306 "num_unmap_ops": 0, 00:09:45.306 "bytes_copied": 0, 00:09:45.306 "num_copy_ops": 0, 00:09:45.306 "read_latency_ticks": 1054149114242, 00:09:45.306 "max_read_latency_ticks": 17017800, 00:09:45.306 "min_read_latency_ticks": 9445880, 00:09:45.306 "write_latency_ticks": 0, 00:09:45.306 "max_write_latency_ticks": 0, 00:09:45.306 "min_write_latency_ticks": 0, 00:09:45.306 "unmap_latency_ticks": 0, 00:09:45.306 "max_unmap_latency_ticks": 0, 00:09:45.306 "min_unmap_latency_ticks": 0, 00:09:45.306 "copy_latency_ticks": 0, 00:09:45.306 "max_copy_latency_ticks": 0, 00:09:45.306 "min_copy_latency_ticks": 0 00:09:45.306 } 00:09:45.306 ] 00:09:45.306 }' 00:09:45.306 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # 
io_count_per_channel1=94208 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=94208 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=72960 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=167168 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:09:45.566 "tick_rate": 2100000000, 00:09:45.566 "ticks": 10180588781442976, 00:09:45.566 "bdevs": [ 00:09:45.566 { 00:09:45.566 "name": "Malloc_STAT", 00:09:45.566 "bytes_read": 725660160, 00:09:45.566 "num_read_ops": 177156, 00:09:45.566 "bytes_written": 0, 00:09:45.566 "num_write_ops": 0, 00:09:45.566 "bytes_unmapped": 0, 00:09:45.566 "num_unmap_ops": 0, 00:09:45.566 "bytes_copied": 0, 00:09:45.566 "num_copy_ops": 0, 00:09:45.566 "read_latency_ticks": 2233384289750, 00:09:45.566 "max_read_latency_ticks": 17017800, 00:09:45.566 "min_read_latency_ticks": 235628, 00:09:45.566 "write_latency_ticks": 0, 00:09:45.566 "max_write_latency_ticks": 0, 00:09:45.566 "min_write_latency_ticks": 0, 00:09:45.566 "unmap_latency_ticks": 0, 00:09:45.566 "max_unmap_latency_ticks": 0, 00:09:45.566 "min_unmap_latency_ticks": 0, 00:09:45.566 "copy_latency_ticks": 0, 00:09:45.566 "max_copy_latency_ticks": 0, 00:09:45.566 "min_copy_latency_ticks": 0, 00:09:45.566 "io_error": {} 00:09:45.566 } 00:09:45.566 ] 00:09:45.566 }' 00:09:45.566 
18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=177156 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 167168 -lt 161284 ']' 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 167168 -gt 177156 ']' 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.566 18:24:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:45.566 00:09:45.566 Latency(us) 00:09:45.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.566 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:45.566 Malloc_STAT : 2.15 48017.60 187.57 0.00 0.00 5318.25 1466.76 5742.20 00:09:45.566 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:45.566 Malloc_STAT : 2.16 37165.38 145.18 0.00 0.00 6869.91 1357.53 8113.98 00:09:45.566 =================================================================================================================== 00:09:45.566 Total : 85182.98 332.75 0.00 0.00 5995.61 1357.53 8113.98 00:09:45.566 0 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 2752970 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 2752970 ']' 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 2752970 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2752970 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2752970' 00:09:45.566 killing process with pid 2752970 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 2752970 00:09:45.566 Received shutdown signal, test time was about 2.227124 seconds 00:09:45.566 00:09:45.566 Latency(us) 00:09:45.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.566 =================================================================================================================== 00:09:45.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.566 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 2752970 00:09:45.826 18:24:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:09:45.826 00:09:45.826 real 0m3.643s 00:09:45.826 user 0m7.415s 00:09:45.826 sys 0m0.403s 00:09:45.826 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.826 18:24:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:45.826 ************************************ 00:09:45.826 END TEST bdev_stat 00:09:45.826 ************************************ 00:09:45.826 18:24:31 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 
00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:45.826 18:24:31 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:45.826 00:09:45.826 real 1m59.378s 00:09:45.826 user 7m42.424s 00:09:45.826 sys 0m26.018s 00:09:45.826 18:24:31 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.826 18:24:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:45.826 ************************************ 00:09:45.826 END TEST blockdev_general 00:09:45.826 ************************************ 00:09:45.826 18:24:31 -- common/autotest_common.sh@1142 -- # return 0 00:09:45.826 18:24:31 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:45.826 18:24:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:45.826 18:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.826 18:24:31 -- common/autotest_common.sh@10 -- # set +x 00:09:45.826 ************************************ 00:09:45.826 START TEST bdev_raid 00:09:45.826 ************************************ 00:09:45.826 18:24:31 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:46.084 * Looking for test storage... 
00:09:46.084 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:46.084 18:24:31 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:09:46.084 18:24:31 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:46.084 18:24:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.084 18:24:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.084 18:24:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.084 ************************************ 00:09:46.084 START TEST raid_function_test_raid0 00:09:46.084 ************************************ 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:46.085 18:24:31 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=2753570 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2753570' 00:09:46.085 Process raid pid: 2753570 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 2753570 /var/tmp/spdk-raid.sock 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 2753570 ']' 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:46.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.085 18:24:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:46.085 [2024-07-15 18:24:31.545080] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:09:46.085 [2024-07-15 18:24:31.545140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.342 [2024-07-15 18:24:31.645471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.342 [2024-07-15 18:24:31.741117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.342 [2024-07-15 18:24:31.807714] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.342 [2024-07-15 18:24:31.807746] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:47.274 [2024-07-15 18:24:32.774282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:47.274 [2024-07-15 18:24:32.775742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:47.274 [2024-07-15 18:24:32.775800] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe74700 00:09:47.274 [2024-07-15 18:24:32.775809] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:47.274 [2024-07-15 18:24:32.776018] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcd7960 00:09:47.274 [2024-07-15 18:24:32.776143] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe74700 00:09:47.274 [2024-07-15 18:24:32.776152] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0xe74700 00:09:47.274 [2024-07-15 18:24:32.776257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.274 Base_1 00:09:47.274 Base_2 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:47.274 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:47.533 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:48.145 [2024-07-15 18:24:33.528338] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcb80b0 00:09:48.145 /dev/nbd0 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.145 1+0 records in 00:09:48.145 1+0 records out 
00:09:48.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198529 s, 20.6 MB/s 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:48.145 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:48.404 { 00:09:48.404 "nbd_device": "/dev/nbd0", 00:09:48.404 "bdev_name": "raid" 00:09:48.404 } 00:09:48.404 ]' 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.404 { 00:09:48.404 "nbd_device": "/dev/nbd0", 00:09:48.404 "bdev_name": "raid" 00:09:48.404 } 00:09:48.404 ]' 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:48.404 18:24:33 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:48.404 4096+0 records in 00:09:48.404 4096+0 records out 00:09:48.404 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0231083 s, 90.8 MB/s 00:09:48.404 18:24:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:48.664 4096+0 records in 00:09:48.664 4096+0 records out 00:09:48.664 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.229243 s, 9.1 MB/s 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:48.664 128+0 records in 00:09:48.664 128+0 records out 00:09:48.664 65536 
bytes (66 kB, 64 KiB) copied, 0.000368248 s, 178 MB/s 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:48.664 2035+0 records in 00:09:48.664 2035+0 records out 00:09:48.664 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00486905 s, 214 MB/s 00:09:48.664 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:48.921 456+0 records in 00:09:48.921 456+0 records out 00:09:48.921 233472 bytes (233 kB, 228 KiB) copied, 0.0011314 s, 206 MB/s 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.921 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:49.178 [2024-07-15 18:24:34.511232] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:49.178 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']'
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 2753570
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 2753570 ']'
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 2753570
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2753570
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2753570'
killing process with pid 2753570
18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 2753570
00:09:49.437 [2024-07-15 18:24:34.900370] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:49.437 [2024-07-15 18:24:34.900433] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:49.437 [2024-07-15 18:24:34.900473] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:49.437 [2024-07-15 18:24:34.900482] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe74700 name raid, state offline
00:09:49.437 18:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 2753570
00:09:49.437 [2024-07-15 18:24:34.916908] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:49.696 18:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0
00:09:49.696
00:09:49.696 real 0m3.621s
00:09:49.696 user 0m5.242s
00:09:49.696 sys 0m0.996s
00:09:49.696 18:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:49.696 18:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:09:49.696 ************************************
00:09:49.696 END TEST raid_function_test_raid0
00:09:49.696 ************************************
00:09:49.696 18:24:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:09:49.696 18:24:35 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat
00:09:49.696 18:24:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:49.696 18:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:49.696 18:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:49.696 ************************************
00:09:49.696 START TEST raid_function_test_concat
00:09:49.696 ************************************
00:09:49.696 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat
00:09:49.696 18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat
00:09:49.696 18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0
00:09:49.696 18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev
00:09:49.696 18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=2754289
00:09:49.696 18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2754289'
Process raid pid: 2754289
18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:09:49.697 18:24:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 2754289 /var/tmp/spdk-raid.sock
00:09:49.697 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 2754289 ']'
00:09:49.697 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:09:49.697 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:49.697 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:49.697 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:09:49.697 [2024-07-15 18:24:35.210923] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:09:49.697 [2024-07-15 18:24:35.210993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:49.954 [2024-07-15 18:24:35.311210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:49.954 [2024-07-15 18:24:35.401505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:49.954 [2024-07-15 18:24:35.462857] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:49.954 [2024-07-15 18:24:35.462890] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:09:50.889 [2024-07-15 18:24:36.344757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:09:50.889 [2024-07-15 18:24:36.346226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:09:50.889 [2024-07-15 18:24:36.346286] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d05700
00:09:50.889 [2024-07-15 18:24:36.346295] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:09:50.889 [2024-07-15 18:24:36.346488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b68960
00:09:50.889 [2024-07-15 18:24:36.346614] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d05700
00:09:50.889 [2024-07-15 18:24:36.346622] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x1d05700
00:09:50.889 [2024-07-15 18:24:36.346729] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:50.889 Base_1 Base_2
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:09:50.889 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)'
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']'
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:09:51.147 18:24:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:09:51.713 [2024-07-15 18:24:37.002556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b491b0
/dev/nbd0
18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:51.713 1+0 records in
00:09:51.713 1+0 records out
00:09:51.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229159 s, 17.9 MB/s
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:09:51.713 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:51.974 {
00:09:51.974 "nbd_device": "/dev/nbd0",
00:09:51.974 "bdev_name": "raid"
00:09:51.974 }
00:09:51.974 ]'
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:51.974 {
00:09:51.974 "nbd_device": "/dev/nbd0",
00:09:51.974 "bdev_name": "raid"
00:09:51.974 }
00:09:51.974 ]'
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']'
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321')
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456')
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:09:51.974 4096+0 records in
00:09:51.974 4096+0 records out
00:09:51.974 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0244078 s, 85.9 MB/s
00:09:51.974 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:09:52.234 4096+0 records in
00:09:52.234 4096+0 records out
00:09:52.234 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.24045 s, 8.7 MB/s
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:09:52.234 128+0 records in
00:09:52.234 128+0 records out
00:09:52.234 65536 bytes (66 kB, 64 KiB) copied, 0.000183168 s, 358 MB/s
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:09:52.234 2035+0 records in
00:09:52.234 2035+0 records out
00:09:52.234 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00478959 s, 218 MB/s
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:09:52.234 456+0 records in
00:09:52.234 456+0 records out
00:09:52.234 233472 bytes (233 kB, 228 KiB) copied, 0.00125009 s, 187 MB/s
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:52.234 18:24:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:09:52.494 [2024-07-15 18:24:38.004648] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:09:52.494 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 2754289
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 2754289 ']'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 2754289
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2754289
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2754289'
killing process with pid 2754289
18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 2754289
00:09:52.753 [2024-07-15 18:24:38.285581] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:52.753 [2024-07-15 18:24:38.285652] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:52.753 [2024-07-15 18:24:38.285692] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:52.753 [2024-07-15 18:24:38.285701] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d05700 name raid, state offline
00:09:52.753 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 2754289
00:09:52.753 [2024-07-15 18:24:38.302207] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:53.013 18:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0
00:09:53.013
00:09:53.013 real 0m3.349s
00:09:53.013 user 0m4.710s
00:09:53.013 sys 0m0.991s
00:09:53.013 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:53.013 18:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:09:53.013 ************************************
00:09:53.013 END TEST raid_function_test_concat
00:09:53.013 ************************************
00:09:53.013 18:24:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:09:53.013 18:24:38 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test
00:09:53.013 18:24:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:53.013 18:24:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:53.013 18:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:53.013 ************************************
00:09:53.013 START TEST raid0_resize_test
00:09:53.013 ************************************
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=2754905
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 2754905'
Process raid pid: 2754905
18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 2754905 /var/tmp/spdk-raid.sock
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 2754905 ']'
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:53.013 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.271 [2024-07-15 18:24:38.600415] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:09:53.271 [2024-07-15 18:24:38.600474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:53.271 [2024-07-15 18:24:38.701178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:53.271 [2024-07-15 18:24:38.795784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.529 [2024-07-15 18:24:38.859323] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:53.529 [2024-07-15 18:24:38.859354] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:54.463 18:24:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:54.463 18:24:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0
00:09:54.463 18:24:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
00:09:54.721 Base_1
00:09:54.721 18:24:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
00:09:54.980 Base_2
00:09:54.980 18:24:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
00:09:54.980 [2024-07-15 18:24:40.523844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:09:54.980 [2024-07-15 18:24:40.525390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:09:54.980 [2024-07-15 18:24:40.525441] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2130e80
00:09:54.980 [2024-07-15 18:24:40.525449] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:09:54.980 [2024-07-15 18:24:40.525657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22e27b0
00:09:54.980 [2024-07-15 18:24:40.525758] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2130e80
00:09:54.980 [2024-07-15 18:24:40.525766] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x2130e80
00:09:54.980 [2024-07-15 18:24:40.525874] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:55.238 18:24:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
00:09:55.238 [2024-07-15 18:24:40.776507] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:09:55.238 [2024-07-15 18:24:40.776529] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:09:55.238 true
00:09:55.495 18:24:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:09:55.495 18:24:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks'
00:09:55.495 [2024-07-15 18:24:41.021315] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:55.495 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072
00:09:55.495 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64
00:09:55.495 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']'
00:09:55.495 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
00:09:55.753 [2024-07-15 18:24:41.273816] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:09:55.753 [2024-07-15 18:24:41.273836] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:09:55.753 [2024-07-15 18:24:41.273857] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:09:55.753 true
00:09:55.753 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:09:55.753 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks'
00:09:56.010 [2024-07-15 18:24:41.526658] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']'
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 2754905
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 2754905 ']'
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 2754905
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:56.010 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2754905
00:09:56.268 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:56.268 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:56.268 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2754905'
killing process with pid 2754905
18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 2754905
00:09:56.269 [2024-07-15 18:24:41.591285] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:56.269 [2024-07-15 18:24:41.591338] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:56.269 [2024-07-15 18:24:41.591378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:56.269 [2024-07-15 18:24:41.591387] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2130e80 name Raid, state offline
00:09:56.269 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 2754905
00:09:56.269 [2024-07-15 18:24:41.592629] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:56.269 18:24:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0
00:09:56.269
00:09:56.269 real 0m3.230s
00:09:56.269 user 0m5.235s
00:09:56.269 sys 0m0.535s
00:09:56.269 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:56.269 18:24:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.269 ************************************
00:09:56.269 END TEST raid0_resize_test
00:09:56.269 ************************************
00:09:56.269 18:24:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:09:56.269 18:24:41 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4}
00:09:56.269 18:24:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1
00:09:56.269 18:24:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:09:56.269 18:24:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5
-le 1 ']' 00:09:56.269 18:24:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.269 18:24:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.269 ************************************ 00:09:56.269 START TEST raid_state_function_test 00:09:56.269 ************************************ 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:56.269 18:24:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:56.269 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2755468 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2755468' 00:09:56.527 Process raid pid: 2755468 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2755468 /var/tmp/spdk-raid.sock 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2755468 ']' 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.527 18:24:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:56.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.527 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.527 [2024-07-15 18:24:41.879453] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:56.527 [2024-07-15 18:24:41.879514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.527 [2024-07-15 18:24:41.977234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.527 [2024-07-15 18:24:42.071518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.785 [2024-07-15 18:24:42.128727] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.785 [2024-07-15 18:24:42.128774] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.720 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.720 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:57.720 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:57.978 [2024-07-15 18:24:43.312296] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.978 [2024-07-15 18:24:43.312334] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.978 [2024-07-15 18:24:43.312344] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.978 [2024-07-15 18:24:43.312352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.978 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.237 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.237 "name": "Existed_Raid", 00:09:58.237 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.237 "strip_size_kb": 64, 00:09:58.237 "state": "configuring", 00:09:58.237 "raid_level": "raid0", 00:09:58.237 "superblock": false, 00:09:58.237 "num_base_bdevs": 2, 00:09:58.237 "num_base_bdevs_discovered": 0, 00:09:58.237 "num_base_bdevs_operational": 2, 00:09:58.237 "base_bdevs_list": [ 00:09:58.237 { 00:09:58.237 "name": "BaseBdev1", 00:09:58.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.237 "is_configured": false, 00:09:58.237 "data_offset": 0, 00:09:58.237 "data_size": 0 00:09:58.237 }, 00:09:58.237 { 00:09:58.237 "name": "BaseBdev2", 00:09:58.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.237 "is_configured": false, 00:09:58.237 "data_offset": 0, 00:09:58.237 "data_size": 0 00:09:58.237 } 00:09:58.237 ] 00:09:58.237 }' 00:09:58.237 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.237 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.804 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:59.062 [2024-07-15 18:24:44.431156] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.062 [2024-07-15 18:24:44.431189] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a58b80 name Existed_Raid, state configuring 00:09:59.062 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:59.630 [2024-07-15 18:24:44.916458] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.630 [2024-07-15 18:24:44.916489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't 
exist now 00:09:59.630 [2024-07-15 18:24:44.916497] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.630 [2024-07-15 18:24:44.916506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.630 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.889 [2024-07-15 18:24:45.194557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.889 BaseBdev1 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:59.889 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.457 [ 00:10:00.457 { 00:10:00.457 "name": "BaseBdev1", 00:10:00.457 "aliases": [ 00:10:00.457 "91fbac8e-59b3-46bf-874d-053dabd724c9" 00:10:00.457 ], 00:10:00.457 "product_name": "Malloc disk", 00:10:00.457 "block_size": 512, 00:10:00.457 "num_blocks": 65536, 
00:10:00.457 "uuid": "91fbac8e-59b3-46bf-874d-053dabd724c9", 00:10:00.457 "assigned_rate_limits": { 00:10:00.457 "rw_ios_per_sec": 0, 00:10:00.457 "rw_mbytes_per_sec": 0, 00:10:00.457 "r_mbytes_per_sec": 0, 00:10:00.457 "w_mbytes_per_sec": 0 00:10:00.457 }, 00:10:00.457 "claimed": true, 00:10:00.457 "claim_type": "exclusive_write", 00:10:00.457 "zoned": false, 00:10:00.457 "supported_io_types": { 00:10:00.457 "read": true, 00:10:00.457 "write": true, 00:10:00.457 "unmap": true, 00:10:00.457 "flush": true, 00:10:00.457 "reset": true, 00:10:00.457 "nvme_admin": false, 00:10:00.457 "nvme_io": false, 00:10:00.457 "nvme_io_md": false, 00:10:00.457 "write_zeroes": true, 00:10:00.457 "zcopy": true, 00:10:00.457 "get_zone_info": false, 00:10:00.457 "zone_management": false, 00:10:00.457 "zone_append": false, 00:10:00.457 "compare": false, 00:10:00.457 "compare_and_write": false, 00:10:00.457 "abort": true, 00:10:00.457 "seek_hole": false, 00:10:00.457 "seek_data": false, 00:10:00.457 "copy": true, 00:10:00.457 "nvme_iov_md": false 00:10:00.457 }, 00:10:00.457 "memory_domains": [ 00:10:00.457 { 00:10:00.457 "dma_device_id": "system", 00:10:00.457 "dma_device_type": 1 00:10:00.457 }, 00:10:00.457 { 00:10:00.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.457 "dma_device_type": 2 00:10:00.457 } 00:10:00.457 ], 00:10:00.457 "driver_specific": {} 00:10:00.457 } 00:10:00.457 ] 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.457 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.716 18:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.716 "name": "Existed_Raid", 00:10:00.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.716 "strip_size_kb": 64, 00:10:00.716 "state": "configuring", 00:10:00.716 "raid_level": "raid0", 00:10:00.716 "superblock": false, 00:10:00.716 "num_base_bdevs": 2, 00:10:00.716 "num_base_bdevs_discovered": 1, 00:10:00.716 "num_base_bdevs_operational": 2, 00:10:00.716 "base_bdevs_list": [ 00:10:00.716 { 00:10:00.716 "name": "BaseBdev1", 00:10:00.716 "uuid": "91fbac8e-59b3-46bf-874d-053dabd724c9", 00:10:00.716 "is_configured": true, 00:10:00.716 "data_offset": 0, 00:10:00.716 "data_size": 65536 00:10:00.716 }, 00:10:00.716 { 00:10:00.716 "name": "BaseBdev2", 00:10:00.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.716 "is_configured": false, 00:10:00.716 "data_offset": 0, 00:10:00.716 "data_size": 0 00:10:00.716 } 00:10:00.716 ] 00:10:00.716 }' 00:10:00.716 18:24:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.716 18:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 18:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:01.544 [2024-07-15 18:24:47.067827] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.544 [2024-07-15 18:24:47.067864] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a58470 name Existed_Raid, state configuring 00:10:01.544 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:02.113 [2024-07-15 18:24:47.557172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.113 [2024-07-15 18:24:47.558683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.113 [2024-07-15 18:24:47.558714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.113 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.372 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:02.372 "name": "Existed_Raid", 00:10:02.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.372 "strip_size_kb": 64, 00:10:02.372 "state": "configuring", 00:10:02.372 "raid_level": "raid0", 00:10:02.372 "superblock": false, 00:10:02.372 "num_base_bdevs": 2, 00:10:02.372 "num_base_bdevs_discovered": 1, 00:10:02.372 "num_base_bdevs_operational": 2, 00:10:02.372 "base_bdevs_list": [ 00:10:02.372 { 00:10:02.372 "name": "BaseBdev1", 00:10:02.372 "uuid": "91fbac8e-59b3-46bf-874d-053dabd724c9", 00:10:02.372 "is_configured": true, 00:10:02.372 "data_offset": 0, 00:10:02.372 "data_size": 65536 00:10:02.372 }, 00:10:02.372 { 00:10:02.372 "name": "BaseBdev2", 00:10:02.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.372 "is_configured": false, 00:10:02.372 "data_offset": 0, 00:10:02.372 "data_size": 0 00:10:02.372 } 00:10:02.372 ] 00:10:02.372 }' 
00:10:02.372 18:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:02.372 18:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.940 18:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.199 [2024-07-15 18:24:48.703568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.199 [2024-07-15 18:24:48.703602] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a59260 00:10:03.199 [2024-07-15 18:24:48.703608] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:03.199 [2024-07-15 18:24:48.703801] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c023d0 00:10:03.199 [2024-07-15 18:24:48.703919] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a59260 00:10:03.199 [2024-07-15 18:24:48.703928] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a59260 00:10:03.199 [2024-07-15 18:24:48.704106] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.199 BaseBdev2 00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:10:03.199 18:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:03.767 18:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.025 [ 00:10:04.025 { 00:10:04.025 "name": "BaseBdev2", 00:10:04.025 "aliases": [ 00:10:04.025 "bdcb38b5-afc7-447e-9e86-86d391de9505" 00:10:04.025 ], 00:10:04.025 "product_name": "Malloc disk", 00:10:04.025 "block_size": 512, 00:10:04.025 "num_blocks": 65536, 00:10:04.025 "uuid": "bdcb38b5-afc7-447e-9e86-86d391de9505", 00:10:04.025 "assigned_rate_limits": { 00:10:04.025 "rw_ios_per_sec": 0, 00:10:04.025 "rw_mbytes_per_sec": 0, 00:10:04.025 "r_mbytes_per_sec": 0, 00:10:04.025 "w_mbytes_per_sec": 0 00:10:04.025 }, 00:10:04.025 "claimed": true, 00:10:04.025 "claim_type": "exclusive_write", 00:10:04.025 "zoned": false, 00:10:04.025 "supported_io_types": { 00:10:04.025 "read": true, 00:10:04.025 "write": true, 00:10:04.025 "unmap": true, 00:10:04.025 "flush": true, 00:10:04.025 "reset": true, 00:10:04.025 "nvme_admin": false, 00:10:04.025 "nvme_io": false, 00:10:04.025 "nvme_io_md": false, 00:10:04.025 "write_zeroes": true, 00:10:04.025 "zcopy": true, 00:10:04.025 "get_zone_info": false, 00:10:04.025 "zone_management": false, 00:10:04.025 "zone_append": false, 00:10:04.025 "compare": false, 00:10:04.025 "compare_and_write": false, 00:10:04.025 "abort": true, 00:10:04.025 "seek_hole": false, 00:10:04.025 "seek_data": false, 00:10:04.025 "copy": true, 00:10:04.025 "nvme_iov_md": false 00:10:04.025 }, 00:10:04.025 "memory_domains": [ 00:10:04.026 { 00:10:04.026 "dma_device_id": "system", 00:10:04.026 "dma_device_type": 1 00:10:04.026 }, 00:10:04.026 { 00:10:04.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.026 "dma_device_type": 2 
00:10:04.026 } 00:10:04.026 ], 00:10:04.026 "driver_specific": {} 00:10:04.026 } 00:10:04.026 ] 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.026 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.319 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:10:04.319 "name": "Existed_Raid", 00:10:04.319 "uuid": "3d7ae051-10ff-4873-b12c-47f5ed9141e6", 00:10:04.319 "strip_size_kb": 64, 00:10:04.319 "state": "online", 00:10:04.319 "raid_level": "raid0", 00:10:04.319 "superblock": false, 00:10:04.319 "num_base_bdevs": 2, 00:10:04.319 "num_base_bdevs_discovered": 2, 00:10:04.319 "num_base_bdevs_operational": 2, 00:10:04.319 "base_bdevs_list": [ 00:10:04.319 { 00:10:04.319 "name": "BaseBdev1", 00:10:04.319 "uuid": "91fbac8e-59b3-46bf-874d-053dabd724c9", 00:10:04.319 "is_configured": true, 00:10:04.319 "data_offset": 0, 00:10:04.319 "data_size": 65536 00:10:04.319 }, 00:10:04.319 { 00:10:04.319 "name": "BaseBdev2", 00:10:04.319 "uuid": "bdcb38b5-afc7-447e-9e86-86d391de9505", 00:10:04.319 "is_configured": true, 00:10:04.319 "data_offset": 0, 00:10:04.319 "data_size": 65536 00:10:04.319 } 00:10:04.319 ] 00:10:04.319 }' 00:10:04.319 18:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.319 18:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:04.886 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:04.886 18:24:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:05.150 [2024-07-15 18:24:50.532787] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.150 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:05.150 "name": "Existed_Raid", 00:10:05.150 "aliases": [ 00:10:05.150 "3d7ae051-10ff-4873-b12c-47f5ed9141e6" 00:10:05.150 ], 00:10:05.150 "product_name": "Raid Volume", 00:10:05.150 "block_size": 512, 00:10:05.150 "num_blocks": 131072, 00:10:05.150 "uuid": "3d7ae051-10ff-4873-b12c-47f5ed9141e6", 00:10:05.150 "assigned_rate_limits": { 00:10:05.150 "rw_ios_per_sec": 0, 00:10:05.150 "rw_mbytes_per_sec": 0, 00:10:05.150 "r_mbytes_per_sec": 0, 00:10:05.150 "w_mbytes_per_sec": 0 00:10:05.150 }, 00:10:05.150 "claimed": false, 00:10:05.150 "zoned": false, 00:10:05.150 "supported_io_types": { 00:10:05.150 "read": true, 00:10:05.150 "write": true, 00:10:05.150 "unmap": true, 00:10:05.150 "flush": true, 00:10:05.150 "reset": true, 00:10:05.150 "nvme_admin": false, 00:10:05.150 "nvme_io": false, 00:10:05.150 "nvme_io_md": false, 00:10:05.150 "write_zeroes": true, 00:10:05.150 "zcopy": false, 00:10:05.150 "get_zone_info": false, 00:10:05.150 "zone_management": false, 00:10:05.150 "zone_append": false, 00:10:05.150 "compare": false, 00:10:05.150 "compare_and_write": false, 00:10:05.150 "abort": false, 00:10:05.150 "seek_hole": false, 00:10:05.150 "seek_data": false, 00:10:05.150 "copy": false, 00:10:05.150 "nvme_iov_md": false 00:10:05.150 }, 00:10:05.150 "memory_domains": [ 00:10:05.150 { 00:10:05.150 "dma_device_id": "system", 00:10:05.150 "dma_device_type": 1 00:10:05.150 }, 00:10:05.150 { 00:10:05.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.150 "dma_device_type": 2 00:10:05.150 }, 00:10:05.150 { 00:10:05.150 "dma_device_id": "system", 00:10:05.150 "dma_device_type": 1 00:10:05.150 }, 00:10:05.150 { 00:10:05.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:05.150 "dma_device_type": 2 00:10:05.150 } 00:10:05.150 ], 00:10:05.150 "driver_specific": { 00:10:05.150 "raid": { 00:10:05.150 "uuid": "3d7ae051-10ff-4873-b12c-47f5ed9141e6", 00:10:05.150 "strip_size_kb": 64, 00:10:05.150 "state": "online", 00:10:05.150 "raid_level": "raid0", 00:10:05.150 "superblock": false, 00:10:05.150 "num_base_bdevs": 2, 00:10:05.150 "num_base_bdevs_discovered": 2, 00:10:05.150 "num_base_bdevs_operational": 2, 00:10:05.150 "base_bdevs_list": [ 00:10:05.150 { 00:10:05.150 "name": "BaseBdev1", 00:10:05.150 "uuid": "91fbac8e-59b3-46bf-874d-053dabd724c9", 00:10:05.150 "is_configured": true, 00:10:05.150 "data_offset": 0, 00:10:05.150 "data_size": 65536 00:10:05.150 }, 00:10:05.150 { 00:10:05.150 "name": "BaseBdev2", 00:10:05.150 "uuid": "bdcb38b5-afc7-447e-9e86-86d391de9505", 00:10:05.150 "is_configured": true, 00:10:05.150 "data_offset": 0, 00:10:05.150 "data_size": 65536 00:10:05.150 } 00:10:05.150 ] 00:10:05.150 } 00:10:05.151 } 00:10:05.151 }' 00:10:05.151 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.151 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:05.151 BaseBdev2' 00:10:05.151 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:05.151 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:05.151 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:05.410 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:05.410 "name": "BaseBdev1", 00:10:05.410 "aliases": [ 00:10:05.410 "91fbac8e-59b3-46bf-874d-053dabd724c9" 00:10:05.410 ], 00:10:05.410 "product_name": "Malloc disk", 
00:10:05.410 "block_size": 512, 00:10:05.410 "num_blocks": 65536, 00:10:05.410 "uuid": "91fbac8e-59b3-46bf-874d-053dabd724c9", 00:10:05.410 "assigned_rate_limits": { 00:10:05.410 "rw_ios_per_sec": 0, 00:10:05.410 "rw_mbytes_per_sec": 0, 00:10:05.410 "r_mbytes_per_sec": 0, 00:10:05.410 "w_mbytes_per_sec": 0 00:10:05.410 }, 00:10:05.410 "claimed": true, 00:10:05.410 "claim_type": "exclusive_write", 00:10:05.410 "zoned": false, 00:10:05.410 "supported_io_types": { 00:10:05.410 "read": true, 00:10:05.410 "write": true, 00:10:05.410 "unmap": true, 00:10:05.410 "flush": true, 00:10:05.410 "reset": true, 00:10:05.410 "nvme_admin": false, 00:10:05.410 "nvme_io": false, 00:10:05.410 "nvme_io_md": false, 00:10:05.410 "write_zeroes": true, 00:10:05.410 "zcopy": true, 00:10:05.410 "get_zone_info": false, 00:10:05.410 "zone_management": false, 00:10:05.410 "zone_append": false, 00:10:05.410 "compare": false, 00:10:05.410 "compare_and_write": false, 00:10:05.410 "abort": true, 00:10:05.410 "seek_hole": false, 00:10:05.410 "seek_data": false, 00:10:05.410 "copy": true, 00:10:05.410 "nvme_iov_md": false 00:10:05.410 }, 00:10:05.410 "memory_domains": [ 00:10:05.410 { 00:10:05.410 "dma_device_id": "system", 00:10:05.410 "dma_device_type": 1 00:10:05.410 }, 00:10:05.410 { 00:10:05.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.410 "dma_device_type": 2 00:10:05.410 } 00:10:05.410 ], 00:10:05.410 "driver_specific": {} 00:10:05.410 }' 00:10:05.410 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.410 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.410 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:05.410 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:05.670 18:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:05.670 18:24:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:05.670 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:05.930 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:05.930 "name": "BaseBdev2", 00:10:05.930 "aliases": [ 00:10:05.930 "bdcb38b5-afc7-447e-9e86-86d391de9505" 00:10:05.930 ], 00:10:05.930 "product_name": "Malloc disk", 00:10:05.930 "block_size": 512, 00:10:05.930 "num_blocks": 65536, 00:10:05.930 "uuid": "bdcb38b5-afc7-447e-9e86-86d391de9505", 00:10:05.930 "assigned_rate_limits": { 00:10:05.930 "rw_ios_per_sec": 0, 00:10:05.930 "rw_mbytes_per_sec": 0, 00:10:05.930 "r_mbytes_per_sec": 0, 00:10:05.930 "w_mbytes_per_sec": 0 00:10:05.930 }, 00:10:05.930 "claimed": true, 00:10:05.930 "claim_type": "exclusive_write", 00:10:05.930 "zoned": false, 00:10:05.930 "supported_io_types": { 00:10:05.930 "read": true, 00:10:05.930 "write": true, 00:10:05.930 "unmap": true, 00:10:05.930 "flush": true, 00:10:05.930 "reset": 
true, 00:10:05.930 "nvme_admin": false, 00:10:05.930 "nvme_io": false, 00:10:05.930 "nvme_io_md": false, 00:10:05.930 "write_zeroes": true, 00:10:05.930 "zcopy": true, 00:10:05.930 "get_zone_info": false, 00:10:05.930 "zone_management": false, 00:10:05.930 "zone_append": false, 00:10:05.930 "compare": false, 00:10:05.930 "compare_and_write": false, 00:10:05.930 "abort": true, 00:10:05.930 "seek_hole": false, 00:10:05.930 "seek_data": false, 00:10:05.930 "copy": true, 00:10:05.930 "nvme_iov_md": false 00:10:05.930 }, 00:10:05.930 "memory_domains": [ 00:10:05.930 { 00:10:05.930 "dma_device_id": "system", 00:10:05.930 "dma_device_type": 1 00:10:05.930 }, 00:10:05.930 { 00:10:05.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.930 "dma_device_type": 2 00:10:05.930 } 00:10:05.930 ], 00:10:05.930 "driver_specific": {} 00:10:05.930 }' 00:10:05.930 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.930 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.930 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:05.930 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:06.189 18:24:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:06.189 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:06.448 [2024-07-15 18:24:51.848057] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.448 [2024-07-15 18:24:51.848081] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.448 [2024-07-15 18:24:51.848120] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.448 18:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.707 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.707 "name": "Existed_Raid", 00:10:06.707 "uuid": "3d7ae051-10ff-4873-b12c-47f5ed9141e6", 00:10:06.707 "strip_size_kb": 64, 00:10:06.707 "state": "offline", 00:10:06.707 "raid_level": "raid0", 00:10:06.707 "superblock": false, 00:10:06.707 "num_base_bdevs": 2, 00:10:06.707 "num_base_bdevs_discovered": 1, 00:10:06.707 "num_base_bdevs_operational": 1, 00:10:06.707 "base_bdevs_list": [ 00:10:06.707 { 00:10:06.707 "name": null, 00:10:06.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.707 "is_configured": false, 00:10:06.707 "data_offset": 0, 00:10:06.707 "data_size": 65536 00:10:06.707 }, 00:10:06.707 { 00:10:06.707 "name": "BaseBdev2", 00:10:06.707 "uuid": "bdcb38b5-afc7-447e-9e86-86d391de9505", 00:10:06.707 "is_configured": true, 00:10:06.707 "data_offset": 0, 00:10:06.707 "data_size": 65536 00:10:06.707 } 00:10:06.707 ] 00:10:06.707 }' 00:10:06.707 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.707 18:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.274 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:07.274 18:24:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:07.274 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:07.274 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.533 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:07.533 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.533 18:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:07.792 [2024-07-15 18:24:53.156702] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.792 [2024-07-15 18:24:53.156748] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a59260 name Existed_Raid, state offline 00:10:07.792 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:07.792 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:07.792 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.792 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2755468 
00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2755468 ']' 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2755468 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2755468 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2755468' 00:10:08.051 killing process with pid 2755468 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2755468 00:10:08.051 [2024-07-15 18:24:53.501511] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.051 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2755468 00:10:08.051 [2024-07-15 18:24:53.502379] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:08.310 00:10:08.310 real 0m11.882s 00:10:08.310 user 0m21.747s 00:10:08.310 sys 0m1.635s 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.310 ************************************ 00:10:08.310 END TEST raid_state_function_test 00:10:08.310 ************************************ 00:10:08.310 18:24:53 bdev_raid 
-- common/autotest_common.sh@1142 -- # return 0 00:10:08.310 18:24:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:08.310 18:24:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:08.310 18:24:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.310 18:24:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.310 ************************************ 00:10:08.310 START TEST raid_state_function_test_sb 00:10:08.310 ************************************ 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- 
# (( i <= num_base_bdevs )) 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2757588 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2757588' 00:10:08.310 Process raid pid: 2757588 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2757588 /var/tmp/spdk-raid.sock 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 2757588 ']' 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:08.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.310 18:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.310 [2024-07-15 18:24:53.803365] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:08.310 [2024-07-15 18:24:53.803424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.568 [2024-07-15 18:24:53.893490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.568 [2024-07-15 18:24:53.987162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.568 [2024-07-15 18:24:54.045134] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.568 [2024-07-15 18:24:54.045165] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:09.503 [2024-07-15 18:24:54.927920] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.503 [2024-07-15 18:24:54.927965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.503 [2024-07-15 18:24:54.927974] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.503 [2024-07-15 18:24:54.927983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.503 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.504 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:09.504 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.504 18:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.763 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.763 "name": "Existed_Raid", 00:10:09.763 "uuid": "80908c4a-313b-4165-aacb-178027724932", 00:10:09.763 "strip_size_kb": 64, 00:10:09.763 "state": "configuring", 00:10:09.763 "raid_level": "raid0", 00:10:09.763 "superblock": true, 00:10:09.763 "num_base_bdevs": 2, 00:10:09.763 "num_base_bdevs_discovered": 0, 00:10:09.763 "num_base_bdevs_operational": 2, 00:10:09.763 "base_bdevs_list": [ 00:10:09.763 { 00:10:09.763 "name": "BaseBdev1", 00:10:09.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.763 "is_configured": false, 00:10:09.763 "data_offset": 0, 00:10:09.763 "data_size": 0 00:10:09.763 }, 00:10:09.763 { 00:10:09.763 "name": "BaseBdev2", 00:10:09.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.763 "is_configured": false, 00:10:09.763 "data_offset": 0, 00:10:09.763 "data_size": 0 00:10:09.763 } 00:10:09.763 ] 00:10:09.763 }' 00:10:09.763 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.763 18:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.329 18:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:10.586 [2024-07-15 18:24:55.990627] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.586 [2024-07-15 18:24:55.990655] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be1b80 name Existed_Raid, state configuring 00:10:10.586 18:24:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:10.844 [2024-07-15 18:24:56.155092] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.844 [2024-07-15 18:24:56.155114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.844 [2024-07-15 18:24:56.155121] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.844 [2024-07-15 18:24:56.155129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.844 [2024-07-15 18:24:56.336959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.844 BaseBdev1 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:10.844 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:11.102 
18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.360 [ 00:10:11.360 { 00:10:11.360 "name": "BaseBdev1", 00:10:11.360 "aliases": [ 00:10:11.360 "bd7916a3-3ba3-4e50-a5ea-55b7991b2130" 00:10:11.360 ], 00:10:11.360 "product_name": "Malloc disk", 00:10:11.360 "block_size": 512, 00:10:11.360 "num_blocks": 65536, 00:10:11.360 "uuid": "bd7916a3-3ba3-4e50-a5ea-55b7991b2130", 00:10:11.360 "assigned_rate_limits": { 00:10:11.360 "rw_ios_per_sec": 0, 00:10:11.360 "rw_mbytes_per_sec": 0, 00:10:11.360 "r_mbytes_per_sec": 0, 00:10:11.360 "w_mbytes_per_sec": 0 00:10:11.360 }, 00:10:11.360 "claimed": true, 00:10:11.360 "claim_type": "exclusive_write", 00:10:11.360 "zoned": false, 00:10:11.360 "supported_io_types": { 00:10:11.360 "read": true, 00:10:11.360 "write": true, 00:10:11.360 "unmap": true, 00:10:11.360 "flush": true, 00:10:11.360 "reset": true, 00:10:11.360 "nvme_admin": false, 00:10:11.360 "nvme_io": false, 00:10:11.360 "nvme_io_md": false, 00:10:11.360 "write_zeroes": true, 00:10:11.360 "zcopy": true, 00:10:11.360 "get_zone_info": false, 00:10:11.360 "zone_management": false, 00:10:11.360 "zone_append": false, 00:10:11.360 "compare": false, 00:10:11.360 "compare_and_write": false, 00:10:11.360 "abort": true, 00:10:11.360 "seek_hole": false, 00:10:11.360 "seek_data": false, 00:10:11.360 "copy": true, 00:10:11.360 "nvme_iov_md": false 00:10:11.360 }, 00:10:11.360 "memory_domains": [ 00:10:11.360 { 00:10:11.360 "dma_device_id": "system", 00:10:11.360 "dma_device_type": 1 00:10:11.360 }, 00:10:11.360 { 00:10:11.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.360 "dma_device_type": 2 00:10:11.360 } 00:10:11.360 ], 00:10:11.360 "driver_specific": {} 00:10:11.360 } 00:10:11.360 ] 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:11.360 
18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.360 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.360 "name": "Existed_Raid", 00:10:11.360 "uuid": "5e3a4758-a6c9-4a20-b892-087b89261b4e", 00:10:11.360 "strip_size_kb": 64, 00:10:11.360 "state": "configuring", 00:10:11.360 "raid_level": "raid0", 00:10:11.360 "superblock": true, 00:10:11.360 "num_base_bdevs": 2, 00:10:11.360 "num_base_bdevs_discovered": 1, 00:10:11.360 "num_base_bdevs_operational": 2, 00:10:11.360 
"base_bdevs_list": [ 00:10:11.360 { 00:10:11.361 "name": "BaseBdev1", 00:10:11.361 "uuid": "bd7916a3-3ba3-4e50-a5ea-55b7991b2130", 00:10:11.361 "is_configured": true, 00:10:11.361 "data_offset": 2048, 00:10:11.361 "data_size": 63488 00:10:11.361 }, 00:10:11.361 { 00:10:11.361 "name": "BaseBdev2", 00:10:11.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.361 "is_configured": false, 00:10:11.361 "data_offset": 0, 00:10:11.361 "data_size": 0 00:10:11.361 } 00:10:11.361 ] 00:10:11.361 }' 00:10:11.361 18:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.361 18:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.294 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:12.294 [2024-07-15 18:24:57.728723] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.294 [2024-07-15 18:24:57.728759] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be1470 name Existed_Raid, state configuring 00:10:12.294 18:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:12.552 [2024-07-15 18:24:57.989462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.552 [2024-07-15 18:24:57.990976] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.552 [2024-07-15 18:24:57.991006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.552 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.811 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.811 "name": "Existed_Raid", 00:10:12.811 "uuid": "0750aef7-a7bf-4237-b2a7-6c7c5fdc0484", 00:10:12.811 "strip_size_kb": 64, 00:10:12.811 "state": "configuring", 00:10:12.811 "raid_level": "raid0", 00:10:12.811 "superblock": true, 00:10:12.811 "num_base_bdevs": 2, 00:10:12.811 
"num_base_bdevs_discovered": 1, 00:10:12.811 "num_base_bdevs_operational": 2, 00:10:12.811 "base_bdevs_list": [ 00:10:12.811 { 00:10:12.811 "name": "BaseBdev1", 00:10:12.811 "uuid": "bd7916a3-3ba3-4e50-a5ea-55b7991b2130", 00:10:12.811 "is_configured": true, 00:10:12.811 "data_offset": 2048, 00:10:12.811 "data_size": 63488 00:10:12.811 }, 00:10:12.811 { 00:10:12.811 "name": "BaseBdev2", 00:10:12.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.811 "is_configured": false, 00:10:12.811 "data_offset": 0, 00:10:12.811 "data_size": 0 00:10:12.811 } 00:10:12.811 ] 00:10:12.811 }' 00:10:12.811 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.811 18:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.378 18:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.636 [2024-07-15 18:24:59.151698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.636 [2024-07-15 18:24:59.151836] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1be2260 00:10:13.636 [2024-07-15 18:24:59.151848] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:13.637 [2024-07-15 18:24:59.152034] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be13c0 00:10:13.637 [2024-07-15 18:24:59.152164] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1be2260 00:10:13.637 [2024-07-15 18:24:59.152173] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1be2260 00:10:13.637 [2024-07-15 18:24:59.152271] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.637 BaseBdev2 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:13.637 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:13.894 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.152 [ 00:10:14.152 { 00:10:14.152 "name": "BaseBdev2", 00:10:14.152 "aliases": [ 00:10:14.152 "09ec02bf-ab9e-4b65-af71-f5e437eee058" 00:10:14.152 ], 00:10:14.152 "product_name": "Malloc disk", 00:10:14.152 "block_size": 512, 00:10:14.152 "num_blocks": 65536, 00:10:14.152 "uuid": "09ec02bf-ab9e-4b65-af71-f5e437eee058", 00:10:14.152 "assigned_rate_limits": { 00:10:14.152 "rw_ios_per_sec": 0, 00:10:14.152 "rw_mbytes_per_sec": 0, 00:10:14.152 "r_mbytes_per_sec": 0, 00:10:14.152 "w_mbytes_per_sec": 0 00:10:14.152 }, 00:10:14.152 "claimed": true, 00:10:14.152 "claim_type": "exclusive_write", 00:10:14.152 "zoned": false, 00:10:14.152 "supported_io_types": { 00:10:14.152 "read": true, 00:10:14.152 "write": true, 00:10:14.152 "unmap": true, 00:10:14.152 "flush": true, 00:10:14.152 "reset": true, 00:10:14.152 "nvme_admin": false, 00:10:14.152 "nvme_io": false, 00:10:14.152 "nvme_io_md": false, 00:10:14.152 "write_zeroes": true, 
00:10:14.152 "zcopy": true, 00:10:14.152 "get_zone_info": false, 00:10:14.152 "zone_management": false, 00:10:14.152 "zone_append": false, 00:10:14.152 "compare": false, 00:10:14.152 "compare_and_write": false, 00:10:14.152 "abort": true, 00:10:14.152 "seek_hole": false, 00:10:14.152 "seek_data": false, 00:10:14.152 "copy": true, 00:10:14.152 "nvme_iov_md": false 00:10:14.152 }, 00:10:14.152 "memory_domains": [ 00:10:14.152 { 00:10:14.152 "dma_device_id": "system", 00:10:14.152 "dma_device_type": 1 00:10:14.152 }, 00:10:14.152 { 00:10:14.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.152 "dma_device_type": 2 00:10:14.152 } 00:10:14.152 ], 00:10:14.152 "driver_specific": {} 00:10:14.152 } 00:10:14.152 ] 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.152 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.409 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.409 "name": "Existed_Raid", 00:10:14.409 "uuid": "0750aef7-a7bf-4237-b2a7-6c7c5fdc0484", 00:10:14.409 "strip_size_kb": 64, 00:10:14.409 "state": "online", 00:10:14.409 "raid_level": "raid0", 00:10:14.409 "superblock": true, 00:10:14.409 "num_base_bdevs": 2, 00:10:14.409 "num_base_bdevs_discovered": 2, 00:10:14.409 "num_base_bdevs_operational": 2, 00:10:14.410 "base_bdevs_list": [ 00:10:14.410 { 00:10:14.410 "name": "BaseBdev1", 00:10:14.410 "uuid": "bd7916a3-3ba3-4e50-a5ea-55b7991b2130", 00:10:14.410 "is_configured": true, 00:10:14.410 "data_offset": 2048, 00:10:14.410 "data_size": 63488 00:10:14.410 }, 00:10:14.410 { 00:10:14.410 "name": "BaseBdev2", 00:10:14.410 "uuid": "09ec02bf-ab9e-4b65-af71-f5e437eee058", 00:10:14.410 "is_configured": true, 00:10:14.410 "data_offset": 2048, 00:10:14.410 "data_size": 63488 00:10:14.410 } 00:10:14.410 ] 00:10:14.410 }' 00:10:14.410 18:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.410 18:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:15.342 [2024-07-15 18:25:00.820510] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.342 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:15.342 "name": "Existed_Raid", 00:10:15.342 "aliases": [ 00:10:15.342 "0750aef7-a7bf-4237-b2a7-6c7c5fdc0484" 00:10:15.342 ], 00:10:15.342 "product_name": "Raid Volume", 00:10:15.342 "block_size": 512, 00:10:15.342 "num_blocks": 126976, 00:10:15.342 "uuid": "0750aef7-a7bf-4237-b2a7-6c7c5fdc0484", 00:10:15.342 "assigned_rate_limits": { 00:10:15.342 "rw_ios_per_sec": 0, 00:10:15.342 "rw_mbytes_per_sec": 0, 00:10:15.342 "r_mbytes_per_sec": 0, 00:10:15.342 "w_mbytes_per_sec": 0 00:10:15.342 }, 00:10:15.342 "claimed": false, 00:10:15.342 "zoned": false, 00:10:15.342 "supported_io_types": { 00:10:15.342 "read": true, 00:10:15.342 "write": true, 00:10:15.342 "unmap": true, 00:10:15.342 "flush": true, 00:10:15.342 "reset": true, 00:10:15.342 "nvme_admin": false, 00:10:15.342 "nvme_io": false, 00:10:15.342 "nvme_io_md": false, 00:10:15.342 "write_zeroes": true, 00:10:15.342 "zcopy": false, 00:10:15.342 "get_zone_info": false, 00:10:15.342 "zone_management": false, 00:10:15.342 
"zone_append": false, 00:10:15.342 "compare": false, 00:10:15.342 "compare_and_write": false, 00:10:15.342 "abort": false, 00:10:15.342 "seek_hole": false, 00:10:15.342 "seek_data": false, 00:10:15.342 "copy": false, 00:10:15.342 "nvme_iov_md": false 00:10:15.342 }, 00:10:15.342 "memory_domains": [ 00:10:15.342 { 00:10:15.342 "dma_device_id": "system", 00:10:15.342 "dma_device_type": 1 00:10:15.342 }, 00:10:15.342 { 00:10:15.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.342 "dma_device_type": 2 00:10:15.342 }, 00:10:15.342 { 00:10:15.342 "dma_device_id": "system", 00:10:15.342 "dma_device_type": 1 00:10:15.342 }, 00:10:15.342 { 00:10:15.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.342 "dma_device_type": 2 00:10:15.342 } 00:10:15.342 ], 00:10:15.342 "driver_specific": { 00:10:15.342 "raid": { 00:10:15.342 "uuid": "0750aef7-a7bf-4237-b2a7-6c7c5fdc0484", 00:10:15.342 "strip_size_kb": 64, 00:10:15.342 "state": "online", 00:10:15.342 "raid_level": "raid0", 00:10:15.342 "superblock": true, 00:10:15.342 "num_base_bdevs": 2, 00:10:15.342 "num_base_bdevs_discovered": 2, 00:10:15.342 "num_base_bdevs_operational": 2, 00:10:15.342 "base_bdevs_list": [ 00:10:15.342 { 00:10:15.342 "name": "BaseBdev1", 00:10:15.342 "uuid": "bd7916a3-3ba3-4e50-a5ea-55b7991b2130", 00:10:15.342 "is_configured": true, 00:10:15.342 "data_offset": 2048, 00:10:15.342 "data_size": 63488 00:10:15.342 }, 00:10:15.342 { 00:10:15.342 "name": "BaseBdev2", 00:10:15.342 "uuid": "09ec02bf-ab9e-4b65-af71-f5e437eee058", 00:10:15.342 "is_configured": true, 00:10:15.342 "data_offset": 2048, 00:10:15.342 "data_size": 63488 00:10:15.342 } 00:10:15.342 ] 00:10:15.342 } 00:10:15.343 } 00:10:15.343 }' 00:10:15.343 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.343 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:15.343 
BaseBdev2' 00:10:15.343 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:15.600 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:15.600 18:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:15.600 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:15.600 "name": "BaseBdev1", 00:10:15.600 "aliases": [ 00:10:15.600 "bd7916a3-3ba3-4e50-a5ea-55b7991b2130" 00:10:15.600 ], 00:10:15.600 "product_name": "Malloc disk", 00:10:15.600 "block_size": 512, 00:10:15.600 "num_blocks": 65536, 00:10:15.600 "uuid": "bd7916a3-3ba3-4e50-a5ea-55b7991b2130", 00:10:15.600 "assigned_rate_limits": { 00:10:15.600 "rw_ios_per_sec": 0, 00:10:15.600 "rw_mbytes_per_sec": 0, 00:10:15.600 "r_mbytes_per_sec": 0, 00:10:15.600 "w_mbytes_per_sec": 0 00:10:15.600 }, 00:10:15.601 "claimed": true, 00:10:15.601 "claim_type": "exclusive_write", 00:10:15.601 "zoned": false, 00:10:15.601 "supported_io_types": { 00:10:15.601 "read": true, 00:10:15.601 "write": true, 00:10:15.601 "unmap": true, 00:10:15.601 "flush": true, 00:10:15.601 "reset": true, 00:10:15.601 "nvme_admin": false, 00:10:15.601 "nvme_io": false, 00:10:15.601 "nvme_io_md": false, 00:10:15.601 "write_zeroes": true, 00:10:15.601 "zcopy": true, 00:10:15.601 "get_zone_info": false, 00:10:15.601 "zone_management": false, 00:10:15.601 "zone_append": false, 00:10:15.601 "compare": false, 00:10:15.601 "compare_and_write": false, 00:10:15.601 "abort": true, 00:10:15.601 "seek_hole": false, 00:10:15.601 "seek_data": false, 00:10:15.601 "copy": true, 00:10:15.601 "nvme_iov_md": false 00:10:15.601 }, 00:10:15.601 "memory_domains": [ 00:10:15.601 { 00:10:15.601 "dma_device_id": "system", 00:10:15.601 "dma_device_type": 1 00:10:15.601 }, 00:10:15.601 { 
00:10:15.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.601 "dma_device_type": 2 00:10:15.601 } 00:10:15.601 ], 00:10:15.601 "driver_specific": {} 00:10:15.601 }' 00:10:15.601 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:15.601 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:15.859 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:16.118 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:16.118 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:16.118 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:16.118 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:16.376 "name": 
"BaseBdev2", 00:10:16.376 "aliases": [ 00:10:16.376 "09ec02bf-ab9e-4b65-af71-f5e437eee058" 00:10:16.376 ], 00:10:16.376 "product_name": "Malloc disk", 00:10:16.376 "block_size": 512, 00:10:16.376 "num_blocks": 65536, 00:10:16.376 "uuid": "09ec02bf-ab9e-4b65-af71-f5e437eee058", 00:10:16.376 "assigned_rate_limits": { 00:10:16.376 "rw_ios_per_sec": 0, 00:10:16.376 "rw_mbytes_per_sec": 0, 00:10:16.376 "r_mbytes_per_sec": 0, 00:10:16.376 "w_mbytes_per_sec": 0 00:10:16.376 }, 00:10:16.376 "claimed": true, 00:10:16.376 "claim_type": "exclusive_write", 00:10:16.376 "zoned": false, 00:10:16.376 "supported_io_types": { 00:10:16.376 "read": true, 00:10:16.376 "write": true, 00:10:16.376 "unmap": true, 00:10:16.376 "flush": true, 00:10:16.376 "reset": true, 00:10:16.376 "nvme_admin": false, 00:10:16.376 "nvme_io": false, 00:10:16.376 "nvme_io_md": false, 00:10:16.376 "write_zeroes": true, 00:10:16.376 "zcopy": true, 00:10:16.376 "get_zone_info": false, 00:10:16.376 "zone_management": false, 00:10:16.376 "zone_append": false, 00:10:16.376 "compare": false, 00:10:16.376 "compare_and_write": false, 00:10:16.376 "abort": true, 00:10:16.376 "seek_hole": false, 00:10:16.376 "seek_data": false, 00:10:16.376 "copy": true, 00:10:16.376 "nvme_iov_md": false 00:10:16.376 }, 00:10:16.376 "memory_domains": [ 00:10:16.376 { 00:10:16.376 "dma_device_id": "system", 00:10:16.376 "dma_device_type": 1 00:10:16.376 }, 00:10:16.376 { 00:10:16.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.376 "dma_device_type": 2 00:10:16.376 } 00:10:16.376 ], 00:10:16.376 "driver_specific": {} 00:10:16.376 }' 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:16.376 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:16.635 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:16.635 18:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:16.635 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:16.635 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:16.635 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:16.895 [2024-07-15 18:25:02.288236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.895 [2024-07-15 18:25:02.288260] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.895 [2024-07-15 18:25:02.288298] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:16.895 18:25:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.895 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.153 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.153 "name": "Existed_Raid", 00:10:17.153 "uuid": "0750aef7-a7bf-4237-b2a7-6c7c5fdc0484", 00:10:17.153 "strip_size_kb": 64, 00:10:17.153 "state": "offline", 00:10:17.153 "raid_level": "raid0", 00:10:17.153 "superblock": true, 00:10:17.153 "num_base_bdevs": 2, 00:10:17.153 "num_base_bdevs_discovered": 1, 00:10:17.153 "num_base_bdevs_operational": 1, 00:10:17.153 "base_bdevs_list": [ 
00:10:17.153 { 00:10:17.153 "name": null, 00:10:17.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.153 "is_configured": false, 00:10:17.153 "data_offset": 2048, 00:10:17.153 "data_size": 63488 00:10:17.153 }, 00:10:17.153 { 00:10:17.153 "name": "BaseBdev2", 00:10:17.153 "uuid": "09ec02bf-ab9e-4b65-af71-f5e437eee058", 00:10:17.153 "is_configured": true, 00:10:17.153 "data_offset": 2048, 00:10:17.153 "data_size": 63488 00:10:17.153 } 00:10:17.153 ] 00:10:17.153 }' 00:10:17.153 18:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.153 18:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.721 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:17.721 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:17.721 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.721 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:17.979 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:17.979 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.979 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:18.547 [2024-07-15 18:25:03.921702] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.547 [2024-07-15 18:25:03.921749] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be2260 name Existed_Raid, state offline 00:10:18.547 18:25:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:18.547 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:18.547 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.547 18:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2757588 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2757588 ']' 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2757588 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2757588 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2757588' 00:10:18.806 killing process with pid 2757588 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@967 -- # kill 2757588 00:10:18.806 [2024-07-15 18:25:04.266449] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.806 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2757588 00:10:18.806 [2024-07-15 18:25:04.267302] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.065 18:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:19.065 00:10:19.065 real 0m10.721s 00:10:19.065 user 0m19.583s 00:10:19.065 sys 0m1.501s 00:10:19.065 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.065 18:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.065 ************************************ 00:10:19.065 END TEST raid_state_function_test_sb 00:10:19.065 ************************************ 00:10:19.065 18:25:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:19.065 18:25:04 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:19.065 18:25:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:19.065 18:25:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.065 18:25:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.065 ************************************ 00:10:19.065 START TEST raid_superblock_test 00:10:19.065 ************************************ 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:19.065 18:25:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2759425 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2759425 /var/tmp/spdk-raid.sock 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2759425 ']' 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:19.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.065 18:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.065 [2024-07-15 18:25:04.598406] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:19.065 [2024-07-15 18:25:04.598515] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759425 ] 00:10:19.324 [2024-07-15 18:25:04.735049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.324 [2024-07-15 18:25:04.829842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.582 [2024-07-15 18:25:04.894997] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.582 [2024-07-15 18:25:04.895028] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.519 18:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.519 18:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:20.519 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:20.519 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc1 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.520 18:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:20.520 malloc1 00:10:20.520 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.088 [2024-07-15 18:25:06.490357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.088 [2024-07-15 18:25:06.490405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.088 [2024-07-15 18:25:06.490423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1453e20 00:10:21.088 [2024-07-15 18:25:06.490432] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.088 [2024-07-15 18:25:06.492174] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.088 [2024-07-15 18:25:06.492200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.088 pt1 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # 
(( i <= num_base_bdevs )) 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:21.088 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:21.406 malloc2 00:10:21.406 18:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.991 [2024-07-15 18:25:07.253159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.991 [2024-07-15 18:25:07.253205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.991 [2024-07-15 18:25:07.253226] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15fded0 00:10:21.991 [2024-07-15 18:25:07.253235] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.991 [2024-07-15 18:25:07.254879] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.991 [2024-07-15 18:25:07.254906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.991 pt2 00:10:21.991 18:25:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:21.991 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:21.991 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:22.250 [2024-07-15 18:25:07.750489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.250 [2024-07-15 18:25:07.751833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.250 [2024-07-15 18:25:07.751987] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15fd170 00:10:22.250 [2024-07-15 18:25:07.752001] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:22.250 [2024-07-15 18:25:07.752198] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ff5d0 00:10:22.250 [2024-07-15 18:25:07.752342] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15fd170 00:10:22.250 [2024-07-15 18:25:07.752351] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15fd170 00:10:22.250 [2024-07-15 18:25:07.752456] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.250 18:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.507 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.507 "name": "raid_bdev1", 00:10:22.507 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:22.507 "strip_size_kb": 64, 00:10:22.507 "state": "online", 00:10:22.507 "raid_level": "raid0", 00:10:22.507 "superblock": true, 00:10:22.507 "num_base_bdevs": 2, 00:10:22.507 "num_base_bdevs_discovered": 2, 00:10:22.507 "num_base_bdevs_operational": 2, 00:10:22.507 "base_bdevs_list": [ 00:10:22.507 { 00:10:22.507 "name": "pt1", 00:10:22.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.507 "is_configured": true, 00:10:22.508 "data_offset": 2048, 00:10:22.508 "data_size": 63488 00:10:22.508 }, 00:10:22.508 { 00:10:22.508 "name": "pt2", 00:10:22.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.508 "is_configured": true, 00:10:22.508 "data_offset": 2048, 00:10:22.508 "data_size": 63488 00:10:22.508 } 00:10:22.508 ] 00:10:22.508 }' 00:10:22.508 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.508 18:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.443 18:25:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:23.443 [2024-07-15 18:25:08.897823] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.443 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:23.443 "name": "raid_bdev1", 00:10:23.443 "aliases": [ 00:10:23.443 "de94b1a0-f8ae-406d-a882-27ea8c55b2f0" 00:10:23.443 ], 00:10:23.443 "product_name": "Raid Volume", 00:10:23.443 "block_size": 512, 00:10:23.443 "num_blocks": 126976, 00:10:23.443 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:23.443 "assigned_rate_limits": { 00:10:23.443 "rw_ios_per_sec": 0, 00:10:23.443 "rw_mbytes_per_sec": 0, 00:10:23.443 "r_mbytes_per_sec": 0, 00:10:23.443 "w_mbytes_per_sec": 0 00:10:23.443 }, 00:10:23.443 "claimed": false, 00:10:23.443 "zoned": false, 00:10:23.443 "supported_io_types": { 00:10:23.443 "read": true, 00:10:23.443 "write": true, 00:10:23.443 "unmap": true, 00:10:23.443 "flush": true, 00:10:23.443 "reset": true, 00:10:23.443 "nvme_admin": false, 00:10:23.443 "nvme_io": false, 00:10:23.443 "nvme_io_md": false, 00:10:23.443 "write_zeroes": 
true, 00:10:23.443 "zcopy": false, 00:10:23.443 "get_zone_info": false, 00:10:23.443 "zone_management": false, 00:10:23.443 "zone_append": false, 00:10:23.443 "compare": false, 00:10:23.443 "compare_and_write": false, 00:10:23.443 "abort": false, 00:10:23.443 "seek_hole": false, 00:10:23.443 "seek_data": false, 00:10:23.443 "copy": false, 00:10:23.443 "nvme_iov_md": false 00:10:23.443 }, 00:10:23.443 "memory_domains": [ 00:10:23.443 { 00:10:23.443 "dma_device_id": "system", 00:10:23.443 "dma_device_type": 1 00:10:23.443 }, 00:10:23.443 { 00:10:23.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.443 "dma_device_type": 2 00:10:23.443 }, 00:10:23.443 { 00:10:23.443 "dma_device_id": "system", 00:10:23.443 "dma_device_type": 1 00:10:23.443 }, 00:10:23.443 { 00:10:23.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.443 "dma_device_type": 2 00:10:23.443 } 00:10:23.443 ], 00:10:23.443 "driver_specific": { 00:10:23.443 "raid": { 00:10:23.443 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:23.443 "strip_size_kb": 64, 00:10:23.443 "state": "online", 00:10:23.443 "raid_level": "raid0", 00:10:23.443 "superblock": true, 00:10:23.443 "num_base_bdevs": 2, 00:10:23.443 "num_base_bdevs_discovered": 2, 00:10:23.443 "num_base_bdevs_operational": 2, 00:10:23.443 "base_bdevs_list": [ 00:10:23.443 { 00:10:23.443 "name": "pt1", 00:10:23.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.444 "is_configured": true, 00:10:23.444 "data_offset": 2048, 00:10:23.444 "data_size": 63488 00:10:23.444 }, 00:10:23.444 { 00:10:23.444 "name": "pt2", 00:10:23.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.444 "is_configured": true, 00:10:23.444 "data_offset": 2048, 00:10:23.444 "data_size": 63488 00:10:23.444 } 00:10:23.444 ] 00:10:23.444 } 00:10:23.444 } 00:10:23.444 }' 00:10:23.444 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.444 18:25:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:23.444 pt2' 00:10:23.444 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:23.444 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:23.444 18:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.702 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.702 "name": "pt1", 00:10:23.702 "aliases": [ 00:10:23.702 "00000000-0000-0000-0000-000000000001" 00:10:23.702 ], 00:10:23.702 "product_name": "passthru", 00:10:23.702 "block_size": 512, 00:10:23.702 "num_blocks": 65536, 00:10:23.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.702 "assigned_rate_limits": { 00:10:23.702 "rw_ios_per_sec": 0, 00:10:23.702 "rw_mbytes_per_sec": 0, 00:10:23.702 "r_mbytes_per_sec": 0, 00:10:23.702 "w_mbytes_per_sec": 0 00:10:23.702 }, 00:10:23.702 "claimed": true, 00:10:23.702 "claim_type": "exclusive_write", 00:10:23.702 "zoned": false, 00:10:23.702 "supported_io_types": { 00:10:23.702 "read": true, 00:10:23.702 "write": true, 00:10:23.702 "unmap": true, 00:10:23.702 "flush": true, 00:10:23.702 "reset": true, 00:10:23.702 "nvme_admin": false, 00:10:23.702 "nvme_io": false, 00:10:23.702 "nvme_io_md": false, 00:10:23.702 "write_zeroes": true, 00:10:23.702 "zcopy": true, 00:10:23.702 "get_zone_info": false, 00:10:23.702 "zone_management": false, 00:10:23.702 "zone_append": false, 00:10:23.702 "compare": false, 00:10:23.702 "compare_and_write": false, 00:10:23.702 "abort": true, 00:10:23.702 "seek_hole": false, 00:10:23.702 "seek_data": false, 00:10:23.702 "copy": true, 00:10:23.703 "nvme_iov_md": false 00:10:23.703 }, 00:10:23.703 "memory_domains": [ 00:10:23.703 { 00:10:23.703 "dma_device_id": "system", 00:10:23.703 
"dma_device_type": 1 00:10:23.703 }, 00:10:23.703 { 00:10:23.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.703 "dma_device_type": 2 00:10:23.703 } 00:10:23.703 ], 00:10:23.703 "driver_specific": { 00:10:23.703 "passthru": { 00:10:23.703 "name": "pt1", 00:10:23.703 "base_bdev_name": "malloc1" 00:10:23.703 } 00:10:23.703 } 00:10:23.703 }' 00:10:23.703 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.962 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.221 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.221 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:24.221 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:24.221 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:24.221 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:24.481 18:25:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:24.481 "name": "pt2", 00:10:24.481 "aliases": [ 00:10:24.481 "00000000-0000-0000-0000-000000000002" 00:10:24.481 ], 00:10:24.481 "product_name": "passthru", 00:10:24.481 "block_size": 512, 00:10:24.481 "num_blocks": 65536, 00:10:24.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.481 "assigned_rate_limits": { 00:10:24.481 "rw_ios_per_sec": 0, 00:10:24.481 "rw_mbytes_per_sec": 0, 00:10:24.481 "r_mbytes_per_sec": 0, 00:10:24.481 "w_mbytes_per_sec": 0 00:10:24.481 }, 00:10:24.481 "claimed": true, 00:10:24.481 "claim_type": "exclusive_write", 00:10:24.481 "zoned": false, 00:10:24.481 "supported_io_types": { 00:10:24.481 "read": true, 00:10:24.481 "write": true, 00:10:24.481 "unmap": true, 00:10:24.481 "flush": true, 00:10:24.481 "reset": true, 00:10:24.481 "nvme_admin": false, 00:10:24.481 "nvme_io": false, 00:10:24.481 "nvme_io_md": false, 00:10:24.481 "write_zeroes": true, 00:10:24.481 "zcopy": true, 00:10:24.481 "get_zone_info": false, 00:10:24.481 "zone_management": false, 00:10:24.481 "zone_append": false, 00:10:24.481 "compare": false, 00:10:24.481 "compare_and_write": false, 00:10:24.481 "abort": true, 00:10:24.481 "seek_hole": false, 00:10:24.481 "seek_data": false, 00:10:24.481 "copy": true, 00:10:24.481 "nvme_iov_md": false 00:10:24.481 }, 00:10:24.481 "memory_domains": [ 00:10:24.481 { 00:10:24.481 "dma_device_id": "system", 00:10:24.481 "dma_device_type": 1 00:10:24.481 }, 00:10:24.481 { 00:10:24.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.481 "dma_device_type": 2 00:10:24.481 } 00:10:24.481 ], 00:10:24.481 "driver_specific": { 00:10:24.481 "passthru": { 00:10:24.481 "name": "pt2", 00:10:24.481 "base_bdev_name": "malloc2" 00:10:24.481 } 00:10:24.481 } 00:10:24.481 }' 00:10:24.481 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:24.481 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:24.481 18:25:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:24.481 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:24.481 18:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:24.481 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:24.481 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:24.739 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:24.998 [2024-07-15 18:25:10.409888] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.998 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=de94b1a0-f8ae-406d-a882-27ea8c55b2f0 00:10:24.998 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z de94b1a0-f8ae-406d-a882-27ea8c55b2f0 ']' 00:10:24.998 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:25.256 [2024-07-15 18:25:10.670334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.256 
[2024-07-15 18:25:10.670354] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.256 [2024-07-15 18:25:10.670407] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.256 [2024-07-15 18:25:10.670450] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.256 [2024-07-15 18:25:10.670459] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15fd170 name raid_bdev1, state offline 00:10:25.256 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.256 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:25.515 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:25.515 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:25.515 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.515 18:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:25.774 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.774 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:26.033 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:26.033 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:26.291 18:25:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:26.291 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:26.292 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:26.551 [2024-07-15 18:25:11.921628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:26.551 [2024-07-15 18:25:11.923031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:26.551 [2024-07-15 18:25:11.923081] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:26.551 [2024-07-15 18:25:11.923116] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:26.551 [2024-07-15 18:25:11.923131] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.551 [2024-07-15 18:25:11.923138] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15fe900 name raid_bdev1, state configuring 00:10:26.551 request: 00:10:26.551 { 00:10:26.551 "name": "raid_bdev1", 00:10:26.551 "raid_level": "raid0", 00:10:26.551 "base_bdevs": [ 00:10:26.551 "malloc1", 00:10:26.551 "malloc2" 00:10:26.551 ], 00:10:26.551 "strip_size_kb": 64, 00:10:26.551 "superblock": false, 00:10:26.551 "method": "bdev_raid_create", 00:10:26.551 "req_id": 1 00:10:26.551 } 00:10:26.551 Got JSON-RPC error response 00:10:26.551 response: 00:10:26.551 { 00:10:26.551 "code": -17, 00:10:26.551 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:26.551 } 00:10:26.551 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:26.551 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:26.551 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:26.551 18:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:26.551 18:25:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.551 18:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:26.810 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:26.810 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:26.810 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:27.069 [2024-07-15 18:25:12.426922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:27.069 [2024-07-15 18:25:12.426971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.069 [2024-07-15 18:25:12.426986] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1601200 00:10:27.069 [2024-07-15 18:25:12.426995] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.069 [2024-07-15 18:25:12.428649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.069 [2024-07-15 18:25:12.428675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:27.069 [2024-07-15 18:25:12.428736] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:27.069 [2024-07-15 18:25:12.428761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:27.069 pt1 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:27.069 18:25:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.069 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.328 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.328 "name": "raid_bdev1", 00:10:27.328 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:27.328 "strip_size_kb": 64, 00:10:27.328 "state": "configuring", 00:10:27.328 "raid_level": "raid0", 00:10:27.328 "superblock": true, 00:10:27.328 "num_base_bdevs": 2, 00:10:27.328 "num_base_bdevs_discovered": 1, 00:10:27.328 "num_base_bdevs_operational": 2, 00:10:27.328 "base_bdevs_list": [ 00:10:27.328 { 00:10:27.328 "name": "pt1", 00:10:27.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.328 "is_configured": true, 00:10:27.328 "data_offset": 2048, 00:10:27.328 "data_size": 63488 00:10:27.328 }, 00:10:27.328 { 00:10:27.328 "name": null, 00:10:27.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.328 
"is_configured": false, 00:10:27.328 "data_offset": 2048, 00:10:27.328 "data_size": 63488 00:10:27.328 } 00:10:27.328 ] 00:10:27.328 }' 00:10:27.328 18:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.328 18:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.893 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:27.893 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:27.893 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:27.893 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.152 [2024-07-15 18:25:13.561991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.152 [2024-07-15 18:25:13.562037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.152 [2024-07-15 18:25:13.562052] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1454050 00:10:28.152 [2024-07-15 18:25:13.562062] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.152 [2024-07-15 18:25:13.562391] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.152 [2024-07-15 18:25:13.562405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.152 [2024-07-15 18:25:13.562462] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:28.152 [2024-07-15 18:25:13.562479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.152 [2024-07-15 18:25:13.562572] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1452fe0 00:10:28.152 [2024-07-15 
18:25:13.562581] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:28.152 [2024-07-15 18:25:13.562755] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1600c60 00:10:28.152 [2024-07-15 18:25:13.562878] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1452fe0 00:10:28.152 [2024-07-15 18:25:13.562886] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1452fe0 00:10:28.152 [2024-07-15 18:25:13.562995] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.152 pt2 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.152 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.411 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.411 "name": "raid_bdev1", 00:10:28.411 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:28.411 "strip_size_kb": 64, 00:10:28.411 "state": "online", 00:10:28.411 "raid_level": "raid0", 00:10:28.411 "superblock": true, 00:10:28.411 "num_base_bdevs": 2, 00:10:28.411 "num_base_bdevs_discovered": 2, 00:10:28.411 "num_base_bdevs_operational": 2, 00:10:28.411 "base_bdevs_list": [ 00:10:28.411 { 00:10:28.411 "name": "pt1", 00:10:28.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.411 "is_configured": true, 00:10:28.411 "data_offset": 2048, 00:10:28.411 "data_size": 63488 00:10:28.411 }, 00:10:28.411 { 00:10:28.411 "name": "pt2", 00:10:28.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.411 "is_configured": true, 00:10:28.411 "data_offset": 2048, 00:10:28.411 "data_size": 63488 00:10:28.411 } 00:10:28.411 ] 00:10:28.411 }' 00:10:28.411 18:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.411 18:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:28.978 18:25:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:28.978 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:29.236 [2024-07-15 18:25:14.705338] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.236 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:29.236 "name": "raid_bdev1", 00:10:29.236 "aliases": [ 00:10:29.236 "de94b1a0-f8ae-406d-a882-27ea8c55b2f0" 00:10:29.236 ], 00:10:29.236 "product_name": "Raid Volume", 00:10:29.236 "block_size": 512, 00:10:29.236 "num_blocks": 126976, 00:10:29.236 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:29.236 "assigned_rate_limits": { 00:10:29.236 "rw_ios_per_sec": 0, 00:10:29.236 "rw_mbytes_per_sec": 0, 00:10:29.236 "r_mbytes_per_sec": 0, 00:10:29.236 "w_mbytes_per_sec": 0 00:10:29.236 }, 00:10:29.236 "claimed": false, 00:10:29.236 "zoned": false, 00:10:29.236 "supported_io_types": { 00:10:29.236 "read": true, 00:10:29.236 "write": true, 00:10:29.236 "unmap": true, 00:10:29.237 "flush": true, 00:10:29.237 "reset": true, 00:10:29.237 "nvme_admin": false, 00:10:29.237 "nvme_io": false, 00:10:29.237 "nvme_io_md": false, 00:10:29.237 "write_zeroes": true, 00:10:29.237 "zcopy": false, 00:10:29.237 "get_zone_info": false, 00:10:29.237 "zone_management": false, 00:10:29.237 "zone_append": false, 00:10:29.237 "compare": false, 00:10:29.237 "compare_and_write": false, 00:10:29.237 "abort": false, 00:10:29.237 "seek_hole": false, 00:10:29.237 "seek_data": false, 00:10:29.237 "copy": false, 00:10:29.237 "nvme_iov_md": false 00:10:29.237 }, 00:10:29.237 "memory_domains": [ 00:10:29.237 { 00:10:29.237 "dma_device_id": "system", 00:10:29.237 "dma_device_type": 1 00:10:29.237 }, 00:10:29.237 { 
00:10:29.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.237 "dma_device_type": 2 00:10:29.237 }, 00:10:29.237 { 00:10:29.237 "dma_device_id": "system", 00:10:29.237 "dma_device_type": 1 00:10:29.237 }, 00:10:29.237 { 00:10:29.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.237 "dma_device_type": 2 00:10:29.237 } 00:10:29.237 ], 00:10:29.237 "driver_specific": { 00:10:29.237 "raid": { 00:10:29.237 "uuid": "de94b1a0-f8ae-406d-a882-27ea8c55b2f0", 00:10:29.237 "strip_size_kb": 64, 00:10:29.237 "state": "online", 00:10:29.237 "raid_level": "raid0", 00:10:29.237 "superblock": true, 00:10:29.237 "num_base_bdevs": 2, 00:10:29.237 "num_base_bdevs_discovered": 2, 00:10:29.237 "num_base_bdevs_operational": 2, 00:10:29.237 "base_bdevs_list": [ 00:10:29.237 { 00:10:29.237 "name": "pt1", 00:10:29.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.237 "is_configured": true, 00:10:29.237 "data_offset": 2048, 00:10:29.237 "data_size": 63488 00:10:29.237 }, 00:10:29.237 { 00:10:29.237 "name": "pt2", 00:10:29.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.237 "is_configured": true, 00:10:29.237 "data_offset": 2048, 00:10:29.237 "data_size": 63488 00:10:29.237 } 00:10:29.237 ] 00:10:29.237 } 00:10:29.237 } 00:10:29.237 }' 00:10:29.237 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.237 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:29.237 pt2' 00:10:29.237 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:29.237 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:29.237 18:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:29.496 18:25:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:29.496 "name": "pt1", 00:10:29.496 "aliases": [ 00:10:29.496 "00000000-0000-0000-0000-000000000001" 00:10:29.496 ], 00:10:29.496 "product_name": "passthru", 00:10:29.496 "block_size": 512, 00:10:29.496 "num_blocks": 65536, 00:10:29.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.496 "assigned_rate_limits": { 00:10:29.496 "rw_ios_per_sec": 0, 00:10:29.496 "rw_mbytes_per_sec": 0, 00:10:29.496 "r_mbytes_per_sec": 0, 00:10:29.496 "w_mbytes_per_sec": 0 00:10:29.496 }, 00:10:29.496 "claimed": true, 00:10:29.496 "claim_type": "exclusive_write", 00:10:29.496 "zoned": false, 00:10:29.496 "supported_io_types": { 00:10:29.496 "read": true, 00:10:29.496 "write": true, 00:10:29.496 "unmap": true, 00:10:29.496 "flush": true, 00:10:29.496 "reset": true, 00:10:29.496 "nvme_admin": false, 00:10:29.496 "nvme_io": false, 00:10:29.496 "nvme_io_md": false, 00:10:29.496 "write_zeroes": true, 00:10:29.496 "zcopy": true, 00:10:29.496 "get_zone_info": false, 00:10:29.496 "zone_management": false, 00:10:29.496 "zone_append": false, 00:10:29.496 "compare": false, 00:10:29.496 "compare_and_write": false, 00:10:29.496 "abort": true, 00:10:29.496 "seek_hole": false, 00:10:29.496 "seek_data": false, 00:10:29.496 "copy": true, 00:10:29.496 "nvme_iov_md": false 00:10:29.496 }, 00:10:29.496 "memory_domains": [ 00:10:29.496 { 00:10:29.496 "dma_device_id": "system", 00:10:29.496 "dma_device_type": 1 00:10:29.496 }, 00:10:29.496 { 00:10:29.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.496 "dma_device_type": 2 00:10:29.496 } 00:10:29.496 ], 00:10:29.496 "driver_specific": { 00:10:29.496 "passthru": { 00:10:29.496 "name": "pt1", 00:10:29.496 "base_bdev_name": "malloc1" 00:10:29.496 } 00:10:29.496 } 00:10:29.496 }' 00:10:29.496 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.754 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:30.013 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:30.013 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:30.013 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:30.013 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:30.013 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:30.013 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:30.274 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:30.274 "name": "pt2", 00:10:30.274 "aliases": [ 00:10:30.274 "00000000-0000-0000-0000-000000000002" 00:10:30.274 ], 00:10:30.274 "product_name": "passthru", 00:10:30.274 "block_size": 512, 00:10:30.274 "num_blocks": 65536, 00:10:30.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.274 "assigned_rate_limits": { 00:10:30.274 "rw_ios_per_sec": 0, 00:10:30.274 "rw_mbytes_per_sec": 0, 00:10:30.274 "r_mbytes_per_sec": 0, 00:10:30.274 "w_mbytes_per_sec": 0 00:10:30.274 }, 
00:10:30.274 "claimed": true, 00:10:30.274 "claim_type": "exclusive_write", 00:10:30.274 "zoned": false, 00:10:30.274 "supported_io_types": { 00:10:30.274 "read": true, 00:10:30.274 "write": true, 00:10:30.274 "unmap": true, 00:10:30.274 "flush": true, 00:10:30.274 "reset": true, 00:10:30.274 "nvme_admin": false, 00:10:30.274 "nvme_io": false, 00:10:30.274 "nvme_io_md": false, 00:10:30.274 "write_zeroes": true, 00:10:30.274 "zcopy": true, 00:10:30.274 "get_zone_info": false, 00:10:30.274 "zone_management": false, 00:10:30.274 "zone_append": false, 00:10:30.274 "compare": false, 00:10:30.274 "compare_and_write": false, 00:10:30.274 "abort": true, 00:10:30.274 "seek_hole": false, 00:10:30.274 "seek_data": false, 00:10:30.274 "copy": true, 00:10:30.274 "nvme_iov_md": false 00:10:30.274 }, 00:10:30.274 "memory_domains": [ 00:10:30.274 { 00:10:30.274 "dma_device_id": "system", 00:10:30.274 "dma_device_type": 1 00:10:30.274 }, 00:10:30.274 { 00:10:30.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.274 "dma_device_type": 2 00:10:30.274 } 00:10:30.274 ], 00:10:30.274 "driver_specific": { 00:10:30.274 "passthru": { 00:10:30.274 "name": "pt2", 00:10:30.275 "base_bdev_name": "malloc2" 00:10:30.275 } 00:10:30.275 } 00:10:30.275 }' 00:10:30.275 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:30.275 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:30.275 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:30.275 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:30.275 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:30.533 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:30.533 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:30.533 18:25:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:30.533 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:30.533 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:30.533 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:30.533 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:30.533 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:30.533 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:30.792 [2024-07-15 18:25:16.265546] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' de94b1a0-f8ae-406d-a882-27ea8c55b2f0 '!=' de94b1a0-f8ae-406d-a882-27ea8c55b2f0 ']' 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2759425 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2759425 ']' 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2759425 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2759425 00:10:30.792 
18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2759425' 00:10:30.792 killing process with pid 2759425 00:10:30.792 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2759425 00:10:30.792 [2024-07-15 18:25:16.330321] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.792 [2024-07-15 18:25:16.330377] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.792 [2024-07-15 18:25:16.330418] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.792 [2024-07-15 18:25:16.330426] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1452fe0 name raid_bdev1, state offline 00:10:30.793 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2759425 00:10:31.052 [2024-07-15 18:25:16.346953] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.052 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:31.052 00:10:31.052 real 0m12.038s 00:10:31.052 user 0m22.084s 00:10:31.052 sys 0m1.704s 00:10:31.052 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.052 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.052 ************************************ 00:10:31.052 END TEST raid_superblock_test 00:10:31.052 ************************************ 00:10:31.052 18:25:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:31.052 18:25:16 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:31.052 18:25:16 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:31.052 18:25:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.052 18:25:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.052 ************************************ 00:10:31.052 START TEST raid_read_error_test 00:10:31.052 ************************************ 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:31.052 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # local strip_size 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GHs4p7cTa1 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2761473 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2761473 /var/tmp/spdk-raid.sock 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2761473 ']' 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:10:31.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.053 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 [2024-07-15 18:25:16.650509] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:31.310 [2024-07-15 18:25:16.650569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761473 ] 00:10:31.310 [2024-07-15 18:25:16.741267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.310 [2024-07-15 18:25:16.836145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.567 [2024-07-15 18:25:16.892091] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.567 [2024-07-15 18:25:16.892117] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.567 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.567 18:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:31.567 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:31.567 18:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.824 BaseBdev1_malloc 00:10:31.824 18:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:32.082 true 00:10:32.082 18:25:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:32.341 [2024-07-15 18:25:17.695777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:32.341 [2024-07-15 18:25:17.695816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.341 [2024-07-15 18:25:17.695833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9ad20 00:10:32.341 [2024-07-15 18:25:17.695842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.341 [2024-07-15 18:25:17.697708] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.341 [2024-07-15 18:25:17.697735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:32.341 BaseBdev1 00:10:32.341 18:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:32.341 18:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:32.600 BaseBdev2_malloc 00:10:32.600 18:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:32.858 true 00:10:32.858 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:33.117 [2024-07-15 18:25:18.454396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:33.117 [2024-07-15 18:25:18.454436] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.117 [2024-07-15 18:25:18.454453] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9fd50 00:10:33.117 [2024-07-15 18:25:18.454462] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.117 [2024-07-15 18:25:18.456050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.117 [2024-07-15 18:25:18.456076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:33.117 BaseBdev2 00:10:33.117 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:33.377 [2024-07-15 18:25:18.707094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.377 [2024-07-15 18:25:18.708473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.377 [2024-07-15 18:25:18.708659] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfa10e0 00:10:33.377 [2024-07-15 18:25:18.708671] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:33.377 [2024-07-15 18:25:18.708868] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfa97d0 00:10:33.377 [2024-07-15 18:25:18.709033] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfa10e0 00:10:33.377 [2024-07-15 18:25:18.709043] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfa10e0 00:10:33.377 [2024-07-15 18:25:18.709155] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:33.377 18:25:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.377 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.636 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.636 "name": "raid_bdev1", 00:10:33.636 "uuid": "fdf52957-0706-47fd-8fe6-884e31b2566a", 00:10:33.636 "strip_size_kb": 64, 00:10:33.636 "state": "online", 00:10:33.636 "raid_level": "raid0", 00:10:33.636 "superblock": true, 00:10:33.636 "num_base_bdevs": 2, 00:10:33.636 "num_base_bdevs_discovered": 2, 00:10:33.636 "num_base_bdevs_operational": 2, 00:10:33.636 "base_bdevs_list": [ 00:10:33.636 { 00:10:33.636 "name": "BaseBdev1", 00:10:33.636 "uuid": "8bc9adae-be94-5f8e-bf1b-3a23f326bb8b", 00:10:33.636 "is_configured": true, 00:10:33.636 "data_offset": 2048, 00:10:33.636 "data_size": 63488 00:10:33.636 }, 
00:10:33.636 { 00:10:33.636 "name": "BaseBdev2", 00:10:33.636 "uuid": "f629c84e-f66b-57ca-ae90-e5dbc3a9655c", 00:10:33.636 "is_configured": true, 00:10:33.636 "data_offset": 2048, 00:10:33.636 "data_size": 63488 00:10:33.636 } 00:10:33.636 ] 00:10:33.636 }' 00:10:33.636 18:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.636 18:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.203 18:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:34.203 18:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:34.203 [2024-07-15 18:25:19.710072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf9cac0 00:10:35.138 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.396 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.669 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.669 "name": "raid_bdev1", 00:10:35.669 "uuid": "fdf52957-0706-47fd-8fe6-884e31b2566a", 00:10:35.669 "strip_size_kb": 64, 00:10:35.669 "state": "online", 00:10:35.669 "raid_level": "raid0", 00:10:35.669 "superblock": true, 00:10:35.669 "num_base_bdevs": 2, 00:10:35.669 "num_base_bdevs_discovered": 2, 00:10:35.669 "num_base_bdevs_operational": 2, 00:10:35.669 "base_bdevs_list": [ 00:10:35.669 { 00:10:35.669 "name": "BaseBdev1", 00:10:35.669 "uuid": "8bc9adae-be94-5f8e-bf1b-3a23f326bb8b", 00:10:35.669 "is_configured": true, 00:10:35.669 "data_offset": 2048, 00:10:35.669 "data_size": 63488 00:10:35.669 }, 00:10:35.669 { 00:10:35.669 "name": "BaseBdev2", 00:10:35.669 "uuid": "f629c84e-f66b-57ca-ae90-e5dbc3a9655c", 00:10:35.669 "is_configured": true, 00:10:35.669 "data_offset": 2048, 00:10:35.669 "data_size": 63488 00:10:35.669 } 00:10:35.669 ] 00:10:35.669 }' 00:10:35.669 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.669 18:25:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.238 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:36.497 [2024-07-15 18:25:21.916528] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.497 [2024-07-15 18:25:21.916561] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.497 [2024-07-15 18:25:21.920104] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.497 [2024-07-15 18:25:21.920134] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.497 [2024-07-15 18:25:21.920165] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.497 [2024-07-15 18:25:21.920173] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfa10e0 name raid_bdev1, state offline 00:10:36.497 0 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2761473 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2761473 ']' 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2761473 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2761473 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2761473' 00:10:36.497 killing process with pid 2761473 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2761473 00:10:36.497 [2024-07-15 18:25:21.983655] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.497 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2761473 00:10:36.497 [2024-07-15 18:25:21.994129] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GHs4p7cTa1 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:10:36.757 00:10:36.757 real 0m5.622s 00:10:36.757 user 0m9.315s 00:10:36.757 sys 0m0.853s 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.757 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.757 ************************************ 00:10:36.757 END TEST raid_read_error_test 00:10:36.757 ************************************ 00:10:36.757 18:25:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:36.757 18:25:22 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 
00:10:36.758 18:25:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:36.758 18:25:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.758 18:25:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.758 ************************************ 00:10:36.758 START TEST raid_write_error_test 00:10:36.758 ************************************ 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 
00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.iaoiedBlgG 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2762496 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2762496 /var/tmp/spdk-raid.sock 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2762496 ']' 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:10:36.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.758 18:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.017 [2024-07-15 18:25:22.315725] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:37.017 [2024-07-15 18:25:22.315791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762496 ] 00:10:37.017 [2024-07-15 18:25:22.415718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.017 [2024-07-15 18:25:22.510937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.017 [2024-07-15 18:25:22.567513] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.017 [2024-07-15 18:25:22.567539] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.410 18:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.410 18:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:38.410 18:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:38.410 18:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.670 BaseBdev1_malloc 00:10:38.670 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:38.670 true 00:10:38.670 18:25:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.929 [2024-07-15 18:25:24.430073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.929 [2024-07-15 18:25:24.430115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.929 [2024-07-15 18:25:24.430134] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa38d20 00:10:38.929 [2024-07-15 18:25:24.430144] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.929 [2024-07-15 18:25:24.431939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.929 [2024-07-15 18:25:24.431975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.929 BaseBdev1 00:10:38.929 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:38.929 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:39.190 BaseBdev2_malloc 00:10:39.190 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:39.449 true 00:10:39.449 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.708 [2024-07-15 18:25:25.192624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.708 [2024-07-15 18:25:25.192666] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.708 [2024-07-15 18:25:25.192684] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3dd50 00:10:39.708 [2024-07-15 18:25:25.192694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.708 [2024-07-15 18:25:25.194317] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.708 [2024-07-15 18:25:25.194343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.708 BaseBdev2 00:10:39.708 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:39.968 [2024-07-15 18:25:25.437310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.968 [2024-07-15 18:25:25.438653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.968 [2024-07-15 18:25:25.438840] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa3f0e0 00:10:39.968 [2024-07-15 18:25:25.438852] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:39.968 [2024-07-15 18:25:25.439054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa477d0 00:10:39.968 [2024-07-15 18:25:25.439212] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa3f0e0 00:10:39.968 [2024-07-15 18:25:25.439221] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa3f0e0 00:10:39.968 [2024-07-15 18:25:25.439333] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:39.968 18:25:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.968 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.227 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:40.227 "name": "raid_bdev1", 00:10:40.227 "uuid": "c6254ba5-0ee6-4380-953f-035a49febdd9", 00:10:40.227 "strip_size_kb": 64, 00:10:40.227 "state": "online", 00:10:40.227 "raid_level": "raid0", 00:10:40.227 "superblock": true, 00:10:40.227 "num_base_bdevs": 2, 00:10:40.227 "num_base_bdevs_discovered": 2, 00:10:40.227 "num_base_bdevs_operational": 2, 00:10:40.227 "base_bdevs_list": [ 00:10:40.227 { 00:10:40.227 "name": "BaseBdev1", 00:10:40.227 "uuid": "1a3c1f1a-c8a5-571a-9f47-2096f1e5143e", 00:10:40.227 "is_configured": true, 00:10:40.227 "data_offset": 2048, 00:10:40.227 "data_size": 63488 00:10:40.227 
}, 00:10:40.227 { 00:10:40.227 "name": "BaseBdev2", 00:10:40.227 "uuid": "dbab7e36-d1df-5bff-9919-5445b13c68c0", 00:10:40.227 "is_configured": true, 00:10:40.227 "data_offset": 2048, 00:10:40.227 "data_size": 63488 00:10:40.227 } 00:10:40.227 ] 00:10:40.227 }' 00:10:40.227 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:40.227 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.163 18:25:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:41.163 18:25:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:41.163 [2024-07-15 18:25:26.684954] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa3aac0 00:10:42.098 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.357 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.615 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.615 "name": "raid_bdev1", 00:10:42.615 "uuid": "c6254ba5-0ee6-4380-953f-035a49febdd9", 00:10:42.615 "strip_size_kb": 64, 00:10:42.615 "state": "online", 00:10:42.615 "raid_level": "raid0", 00:10:42.616 "superblock": true, 00:10:42.616 "num_base_bdevs": 2, 00:10:42.616 "num_base_bdevs_discovered": 2, 00:10:42.616 "num_base_bdevs_operational": 2, 00:10:42.616 "base_bdevs_list": [ 00:10:42.616 { 00:10:42.616 "name": "BaseBdev1", 00:10:42.616 "uuid": "1a3c1f1a-c8a5-571a-9f47-2096f1e5143e", 00:10:42.616 "is_configured": true, 00:10:42.616 "data_offset": 2048, 00:10:42.616 "data_size": 63488 00:10:42.616 }, 00:10:42.616 { 00:10:42.616 "name": "BaseBdev2", 00:10:42.616 "uuid": "dbab7e36-d1df-5bff-9919-5445b13c68c0", 00:10:42.616 "is_configured": true, 00:10:42.616 "data_offset": 2048, 00:10:42.616 "data_size": 63488 00:10:42.616 } 00:10:42.616 ] 00:10:42.616 }' 00:10:42.616 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.616 18:25:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.551 18:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:43.810 [2024-07-15 18:25:29.206875] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.810 [2024-07-15 18:25:29.206909] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.810 [2024-07-15 18:25:29.210515] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.810 [2024-07-15 18:25:29.210545] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.810 [2024-07-15 18:25:29.210571] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.810 [2024-07-15 18:25:29.210579] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa3f0e0 name raid_bdev1, state offline 00:10:43.810 0 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2762496 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2762496 ']' 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2762496 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2762496 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:43.810 18:25:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2762496' 00:10:43.810 killing process with pid 2762496 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2762496 00:10:43.810 [2024-07-15 18:25:29.277495] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.810 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2762496 00:10:43.810 [2024-07-15 18:25:29.287820] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.iaoiedBlgG 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.40 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.40 != \0\.\0\0 ]] 00:10:44.069 00:10:44.069 real 0m7.252s 00:10:44.069 user 0m11.957s 00:10:44.069 sys 0m0.948s 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.069 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.069 ************************************ 00:10:44.069 END TEST raid_write_error_test 00:10:44.069 ************************************ 00:10:44.069 18:25:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:44.069 18:25:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in 
raid0 concat raid1 00:10:44.069 18:25:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:44.069 18:25:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:44.069 18:25:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.069 18:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.069 ************************************ 00:10:44.069 START TEST raid_state_function_test 00:10:44.069 ************************************ 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:44.069 18:25:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2763784 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2763784' 00:10:44.069 Process raid pid: 2763784 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2763784 /var/tmp/spdk-raid.sock 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2763784 ']' 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:44.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.069 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.069 [2024-07-15 18:25:29.605518] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:44.069 [2024-07-15 18:25:29.605576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.327 [2024-07-15 18:25:29.703759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.327 [2024-07-15 18:25:29.798113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.327 [2024-07-15 18:25:29.856474] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.327 [2024-07-15 18:25:29.856505] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.261 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.261 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:45.261 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n 
Existed_Raid 00:10:45.827 [2024-07-15 18:25:31.265218] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.827 [2024-07-15 18:25:31.265256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.827 [2024-07-15 18:25:31.265265] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.827 [2024-07-15 18:25:31.265274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.827 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:46.086 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.086 "name": "Existed_Raid", 00:10:46.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.086 "strip_size_kb": 64, 00:10:46.086 "state": "configuring", 00:10:46.086 "raid_level": "concat", 00:10:46.086 "superblock": false, 00:10:46.086 "num_base_bdevs": 2, 00:10:46.086 "num_base_bdevs_discovered": 0, 00:10:46.086 "num_base_bdevs_operational": 2, 00:10:46.086 "base_bdevs_list": [ 00:10:46.086 { 00:10:46.086 "name": "BaseBdev1", 00:10:46.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.086 "is_configured": false, 00:10:46.086 "data_offset": 0, 00:10:46.086 "data_size": 0 00:10:46.086 }, 00:10:46.086 { 00:10:46.086 "name": "BaseBdev2", 00:10:46.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.086 "is_configured": false, 00:10:46.086 "data_offset": 0, 00:10:46.086 "data_size": 0 00:10:46.086 } 00:10:46.086 ] 00:10:46.086 }' 00:10:46.086 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.086 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.652 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:46.910 [2024-07-15 18:25:32.408124] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.910 [2024-07-15 18:25:32.408153] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb2fb80 name Existed_Raid, state configuring 00:10:46.910 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:47.473 [2024-07-15 18:25:32.729004] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.473 [2024-07-15 18:25:32.729032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.473 [2024-07-15 18:25:32.729039] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.473 [2024-07-15 18:25:32.729047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.474 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.474 [2024-07-15 18:25:32.995074] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.474 BaseBdev1 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:47.474 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:47.731 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.988 [ 00:10:47.988 { 00:10:47.988 "name": 
"BaseBdev1", 00:10:47.988 "aliases": [ 00:10:47.988 "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85" 00:10:47.988 ], 00:10:47.988 "product_name": "Malloc disk", 00:10:47.988 "block_size": 512, 00:10:47.988 "num_blocks": 65536, 00:10:47.988 "uuid": "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85", 00:10:47.988 "assigned_rate_limits": { 00:10:47.988 "rw_ios_per_sec": 0, 00:10:47.988 "rw_mbytes_per_sec": 0, 00:10:47.988 "r_mbytes_per_sec": 0, 00:10:47.988 "w_mbytes_per_sec": 0 00:10:47.988 }, 00:10:47.988 "claimed": true, 00:10:47.988 "claim_type": "exclusive_write", 00:10:47.988 "zoned": false, 00:10:47.988 "supported_io_types": { 00:10:47.988 "read": true, 00:10:47.988 "write": true, 00:10:47.988 "unmap": true, 00:10:47.988 "flush": true, 00:10:47.988 "reset": true, 00:10:47.988 "nvme_admin": false, 00:10:47.988 "nvme_io": false, 00:10:47.988 "nvme_io_md": false, 00:10:47.988 "write_zeroes": true, 00:10:47.988 "zcopy": true, 00:10:47.988 "get_zone_info": false, 00:10:47.988 "zone_management": false, 00:10:47.988 "zone_append": false, 00:10:47.988 "compare": false, 00:10:47.988 "compare_and_write": false, 00:10:47.988 "abort": true, 00:10:47.988 "seek_hole": false, 00:10:47.988 "seek_data": false, 00:10:47.988 "copy": true, 00:10:47.988 "nvme_iov_md": false 00:10:47.988 }, 00:10:47.988 "memory_domains": [ 00:10:47.988 { 00:10:47.988 "dma_device_id": "system", 00:10:47.988 "dma_device_type": 1 00:10:47.988 }, 00:10:47.988 { 00:10:47.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.988 "dma_device_type": 2 00:10:47.988 } 00:10:47.988 ], 00:10:47.988 "driver_specific": {} 00:10:47.988 } 00:10:47.988 ] 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:47.988 
18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.988 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.555 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.555 "name": "Existed_Raid", 00:10:48.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.555 "strip_size_kb": 64, 00:10:48.555 "state": "configuring", 00:10:48.555 "raid_level": "concat", 00:10:48.555 "superblock": false, 00:10:48.555 "num_base_bdevs": 2, 00:10:48.555 "num_base_bdevs_discovered": 1, 00:10:48.555 "num_base_bdevs_operational": 2, 00:10:48.555 "base_bdevs_list": [ 00:10:48.555 { 00:10:48.555 "name": "BaseBdev1", 00:10:48.555 "uuid": "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85", 00:10:48.555 "is_configured": true, 00:10:48.555 "data_offset": 0, 00:10:48.555 "data_size": 65536 00:10:48.555 }, 00:10:48.555 { 00:10:48.555 "name": "BaseBdev2", 
00:10:48.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.555 "is_configured": false, 00:10:48.555 "data_offset": 0, 00:10:48.555 "data_size": 0 00:10:48.555 } 00:10:48.555 ] 00:10:48.555 }' 00:10:48.555 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.555 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.123 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:49.123 [2024-07-15 18:25:34.647549] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.123 [2024-07-15 18:25:34.647582] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb2f470 name Existed_Raid, state configuring 00:10:49.123 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:49.381 [2024-07-15 18:25:34.908273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.381 [2024-07-15 18:25:34.909774] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.381 [2024-07-15 18:25:34.909803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.381 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:49.381 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.639 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.639 18:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.639 "name": "Existed_Raid", 00:10:49.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.639 "strip_size_kb": 64, 00:10:49.639 "state": "configuring", 00:10:49.639 "raid_level": "concat", 00:10:49.639 "superblock": false, 00:10:49.639 "num_base_bdevs": 2, 00:10:49.639 "num_base_bdevs_discovered": 1, 00:10:49.639 "num_base_bdevs_operational": 2, 00:10:49.639 "base_bdevs_list": [ 00:10:49.639 { 00:10:49.639 "name": "BaseBdev1", 00:10:49.639 "uuid": "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85", 00:10:49.639 "is_configured": true, 00:10:49.639 "data_offset": 0, 00:10:49.639 "data_size": 65536 00:10:49.639 }, 00:10:49.639 { 00:10:49.639 "name": 
"BaseBdev2", 00:10:49.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.639 "is_configured": false, 00:10:49.639 "data_offset": 0, 00:10:49.639 "data_size": 0 00:10:49.639 } 00:10:49.639 ] 00:10:49.639 }' 00:10:49.639 18:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.929 18:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.494 18:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.752 [2024-07-15 18:25:36.058519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.752 [2024-07-15 18:25:36.058550] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb30260 00:10:50.752 [2024-07-15 18:25:36.058557] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:50.752 [2024-07-15 18:25:36.058751] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcd93d0 00:10:50.752 [2024-07-15 18:25:36.058869] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb30260 00:10:50.752 [2024-07-15 18:25:36.058878] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb30260 00:10:50.752 [2024-07-15 18:25:36.059054] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.752 BaseBdev2 00:10:50.752 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:50.752 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:50.752 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:50.752 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:50.752 
18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:50.752 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:50.752 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:51.009 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.267 [ 00:10:51.267 { 00:10:51.267 "name": "BaseBdev2", 00:10:51.267 "aliases": [ 00:10:51.267 "55782215-9624-4105-84cb-2171eb6e4b51" 00:10:51.267 ], 00:10:51.267 "product_name": "Malloc disk", 00:10:51.267 "block_size": 512, 00:10:51.267 "num_blocks": 65536, 00:10:51.267 "uuid": "55782215-9624-4105-84cb-2171eb6e4b51", 00:10:51.267 "assigned_rate_limits": { 00:10:51.267 "rw_ios_per_sec": 0, 00:10:51.267 "rw_mbytes_per_sec": 0, 00:10:51.267 "r_mbytes_per_sec": 0, 00:10:51.267 "w_mbytes_per_sec": 0 00:10:51.267 }, 00:10:51.267 "claimed": true, 00:10:51.267 "claim_type": "exclusive_write", 00:10:51.267 "zoned": false, 00:10:51.267 "supported_io_types": { 00:10:51.267 "read": true, 00:10:51.267 "write": true, 00:10:51.267 "unmap": true, 00:10:51.267 "flush": true, 00:10:51.267 "reset": true, 00:10:51.267 "nvme_admin": false, 00:10:51.267 "nvme_io": false, 00:10:51.267 "nvme_io_md": false, 00:10:51.267 "write_zeroes": true, 00:10:51.267 "zcopy": true, 00:10:51.267 "get_zone_info": false, 00:10:51.267 "zone_management": false, 00:10:51.267 "zone_append": false, 00:10:51.267 "compare": false, 00:10:51.267 "compare_and_write": false, 00:10:51.267 "abort": true, 00:10:51.267 "seek_hole": false, 00:10:51.267 "seek_data": false, 00:10:51.267 "copy": true, 00:10:51.267 "nvme_iov_md": false 00:10:51.267 }, 00:10:51.267 "memory_domains": [ 
00:10:51.267 { 00:10:51.267 "dma_device_id": "system", 00:10:51.267 "dma_device_type": 1 00:10:51.267 }, 00:10:51.267 { 00:10:51.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.267 "dma_device_type": 2 00:10:51.267 } 00:10:51.267 ], 00:10:51.267 "driver_specific": {} 00:10:51.267 } 00:10:51.267 ] 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.267 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.267 18:25:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.524 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.524 "name": "Existed_Raid", 00:10:51.524 "uuid": "20be8755-ca6d-4b45-8d25-4e4034e2d463", 00:10:51.524 "strip_size_kb": 64, 00:10:51.524 "state": "online", 00:10:51.524 "raid_level": "concat", 00:10:51.524 "superblock": false, 00:10:51.524 "num_base_bdevs": 2, 00:10:51.524 "num_base_bdevs_discovered": 2, 00:10:51.525 "num_base_bdevs_operational": 2, 00:10:51.525 "base_bdevs_list": [ 00:10:51.525 { 00:10:51.525 "name": "BaseBdev1", 00:10:51.525 "uuid": "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85", 00:10:51.525 "is_configured": true, 00:10:51.525 "data_offset": 0, 00:10:51.525 "data_size": 65536 00:10:51.525 }, 00:10:51.525 { 00:10:51.525 "name": "BaseBdev2", 00:10:51.525 "uuid": "55782215-9624-4105-84cb-2171eb6e4b51", 00:10:51.525 "is_configured": true, 00:10:51.525 "data_offset": 0, 00:10:51.525 "data_size": 65536 00:10:51.525 } 00:10:51.525 ] 00:10:51.525 }' 00:10:51.525 18:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.525 18:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:52.089 18:25:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:52.089 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:52.348 [2024-07-15 18:25:37.723441] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.348 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:52.348 "name": "Existed_Raid", 00:10:52.348 "aliases": [ 00:10:52.348 "20be8755-ca6d-4b45-8d25-4e4034e2d463" 00:10:52.348 ], 00:10:52.348 "product_name": "Raid Volume", 00:10:52.348 "block_size": 512, 00:10:52.348 "num_blocks": 131072, 00:10:52.348 "uuid": "20be8755-ca6d-4b45-8d25-4e4034e2d463", 00:10:52.348 "assigned_rate_limits": { 00:10:52.348 "rw_ios_per_sec": 0, 00:10:52.348 "rw_mbytes_per_sec": 0, 00:10:52.348 "r_mbytes_per_sec": 0, 00:10:52.348 "w_mbytes_per_sec": 0 00:10:52.348 }, 00:10:52.348 "claimed": false, 00:10:52.348 "zoned": false, 00:10:52.348 "supported_io_types": { 00:10:52.348 "read": true, 00:10:52.348 "write": true, 00:10:52.348 "unmap": true, 00:10:52.348 "flush": true, 00:10:52.348 "reset": true, 00:10:52.348 "nvme_admin": false, 00:10:52.348 "nvme_io": false, 00:10:52.348 "nvme_io_md": false, 00:10:52.348 "write_zeroes": true, 00:10:52.348 "zcopy": false, 00:10:52.348 "get_zone_info": false, 00:10:52.348 "zone_management": false, 00:10:52.348 "zone_append": false, 00:10:52.348 "compare": false, 00:10:52.348 "compare_and_write": false, 00:10:52.348 "abort": false, 00:10:52.348 "seek_hole": false, 00:10:52.348 "seek_data": false, 00:10:52.348 "copy": false, 00:10:52.348 "nvme_iov_md": false 00:10:52.348 }, 00:10:52.348 "memory_domains": [ 00:10:52.348 { 00:10:52.348 "dma_device_id": "system", 00:10:52.348 "dma_device_type": 1 00:10:52.348 }, 00:10:52.348 { 00:10:52.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.348 
"dma_device_type": 2 00:10:52.348 }, 00:10:52.348 { 00:10:52.348 "dma_device_id": "system", 00:10:52.348 "dma_device_type": 1 00:10:52.348 }, 00:10:52.348 { 00:10:52.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.348 "dma_device_type": 2 00:10:52.348 } 00:10:52.348 ], 00:10:52.348 "driver_specific": { 00:10:52.348 "raid": { 00:10:52.348 "uuid": "20be8755-ca6d-4b45-8d25-4e4034e2d463", 00:10:52.348 "strip_size_kb": 64, 00:10:52.348 "state": "online", 00:10:52.348 "raid_level": "concat", 00:10:52.348 "superblock": false, 00:10:52.348 "num_base_bdevs": 2, 00:10:52.348 "num_base_bdevs_discovered": 2, 00:10:52.348 "num_base_bdevs_operational": 2, 00:10:52.348 "base_bdevs_list": [ 00:10:52.348 { 00:10:52.348 "name": "BaseBdev1", 00:10:52.348 "uuid": "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85", 00:10:52.348 "is_configured": true, 00:10:52.348 "data_offset": 0, 00:10:52.348 "data_size": 65536 00:10:52.348 }, 00:10:52.348 { 00:10:52.348 "name": "BaseBdev2", 00:10:52.348 "uuid": "55782215-9624-4105-84cb-2171eb6e4b51", 00:10:52.348 "is_configured": true, 00:10:52.348 "data_offset": 0, 00:10:52.348 "data_size": 65536 00:10:52.348 } 00:10:52.348 ] 00:10:52.348 } 00:10:52.348 } 00:10:52.348 }' 00:10:52.348 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.348 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:52.348 BaseBdev2' 00:10:52.348 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:52.348 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:52.348 18:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:52.606 18:25:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:52.606 "name": "BaseBdev1", 00:10:52.606 "aliases": [ 00:10:52.606 "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85" 00:10:52.606 ], 00:10:52.606 "product_name": "Malloc disk", 00:10:52.606 "block_size": 512, 00:10:52.606 "num_blocks": 65536, 00:10:52.606 "uuid": "bb69fe0f-1cd9-4964-a8fc-400d90ca6f85", 00:10:52.606 "assigned_rate_limits": { 00:10:52.606 "rw_ios_per_sec": 0, 00:10:52.606 "rw_mbytes_per_sec": 0, 00:10:52.606 "r_mbytes_per_sec": 0, 00:10:52.606 "w_mbytes_per_sec": 0 00:10:52.606 }, 00:10:52.606 "claimed": true, 00:10:52.606 "claim_type": "exclusive_write", 00:10:52.606 "zoned": false, 00:10:52.606 "supported_io_types": { 00:10:52.606 "read": true, 00:10:52.606 "write": true, 00:10:52.606 "unmap": true, 00:10:52.606 "flush": true, 00:10:52.606 "reset": true, 00:10:52.606 "nvme_admin": false, 00:10:52.606 "nvme_io": false, 00:10:52.606 "nvme_io_md": false, 00:10:52.606 "write_zeroes": true, 00:10:52.606 "zcopy": true, 00:10:52.606 "get_zone_info": false, 00:10:52.606 "zone_management": false, 00:10:52.606 "zone_append": false, 00:10:52.606 "compare": false, 00:10:52.606 "compare_and_write": false, 00:10:52.606 "abort": true, 00:10:52.606 "seek_hole": false, 00:10:52.606 "seek_data": false, 00:10:52.606 "copy": true, 00:10:52.606 "nvme_iov_md": false 00:10:52.606 }, 00:10:52.606 "memory_domains": [ 00:10:52.606 { 00:10:52.606 "dma_device_id": "system", 00:10:52.606 "dma_device_type": 1 00:10:52.606 }, 00:10:52.606 { 00:10:52.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.606 "dma_device_type": 2 00:10:52.606 } 00:10:52.606 ], 00:10:52.606 "driver_specific": {} 00:10:52.606 }' 00:10:52.606 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.606 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.606 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:52.606 18:25:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.864 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.121 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:53.121 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:53.121 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:53.121 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:53.380 "name": "BaseBdev2", 00:10:53.380 "aliases": [ 00:10:53.380 "55782215-9624-4105-84cb-2171eb6e4b51" 00:10:53.380 ], 00:10:53.380 "product_name": "Malloc disk", 00:10:53.380 "block_size": 512, 00:10:53.380 "num_blocks": 65536, 00:10:53.380 "uuid": "55782215-9624-4105-84cb-2171eb6e4b51", 00:10:53.380 "assigned_rate_limits": { 00:10:53.380 "rw_ios_per_sec": 0, 00:10:53.380 "rw_mbytes_per_sec": 0, 00:10:53.380 "r_mbytes_per_sec": 0, 00:10:53.380 "w_mbytes_per_sec": 0 00:10:53.380 }, 00:10:53.380 "claimed": true, 00:10:53.380 "claim_type": "exclusive_write", 
00:10:53.380 "zoned": false, 00:10:53.380 "supported_io_types": { 00:10:53.380 "read": true, 00:10:53.380 "write": true, 00:10:53.380 "unmap": true, 00:10:53.380 "flush": true, 00:10:53.380 "reset": true, 00:10:53.380 "nvme_admin": false, 00:10:53.380 "nvme_io": false, 00:10:53.380 "nvme_io_md": false, 00:10:53.380 "write_zeroes": true, 00:10:53.380 "zcopy": true, 00:10:53.380 "get_zone_info": false, 00:10:53.380 "zone_management": false, 00:10:53.380 "zone_append": false, 00:10:53.380 "compare": false, 00:10:53.380 "compare_and_write": false, 00:10:53.380 "abort": true, 00:10:53.380 "seek_hole": false, 00:10:53.380 "seek_data": false, 00:10:53.380 "copy": true, 00:10:53.380 "nvme_iov_md": false 00:10:53.380 }, 00:10:53.380 "memory_domains": [ 00:10:53.380 { 00:10:53.380 "dma_device_id": "system", 00:10:53.380 "dma_device_type": 1 00:10:53.380 }, 00:10:53.380 { 00:10:53.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.380 "dma_device_type": 2 00:10:53.380 } 00:10:53.380 ], 00:10:53.380 "driver_specific": {} 00:10:53.380 }' 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:53.380 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.639 18:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.639 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:53.639 18:25:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.639 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.639 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:53.639 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:53.897 [2024-07-15 18:25:39.427795] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.897 [2024-07-15 18:25:39.427819] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.897 [2024-07-15 18:25:39.427860] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:54.157 18:25:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.157 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.157 "name": "Existed_Raid", 00:10:54.157 "uuid": "20be8755-ca6d-4b45-8d25-4e4034e2d463", 00:10:54.157 "strip_size_kb": 64, 00:10:54.157 "state": "offline", 00:10:54.157 "raid_level": "concat", 00:10:54.157 "superblock": false, 00:10:54.157 "num_base_bdevs": 2, 00:10:54.157 "num_base_bdevs_discovered": 1, 00:10:54.157 "num_base_bdevs_operational": 1, 00:10:54.157 "base_bdevs_list": [ 00:10:54.157 { 00:10:54.157 "name": null, 00:10:54.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.157 "is_configured": false, 00:10:54.157 "data_offset": 0, 00:10:54.157 "data_size": 65536 00:10:54.157 }, 00:10:54.157 { 00:10:54.157 "name": "BaseBdev2", 00:10:54.157 "uuid": "55782215-9624-4105-84cb-2171eb6e4b51", 00:10:54.157 "is_configured": true, 00:10:54.157 "data_offset": 0, 00:10:54.157 "data_size": 65536 00:10:54.157 } 00:10:54.157 ] 00:10:54.157 }' 00:10:54.416 18:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.416 18:25:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.385 18:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:55.643 [2024-07-15 18:25:41.093325] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.644 [2024-07-15 18:25:41.093374] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb30260 name Existed_Raid, state offline 00:10:55.644 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:55.644 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:55.644 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.644 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 
00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2763784 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2763784 ']' 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2763784 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.903 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2763784 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2763784' 00:10:56.162 killing process with pid 2763784 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2763784 00:10:56.162 [2024-07-15 18:25:41.502321] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2763784 00:10:56.162 [2024-07-15 18:25:41.503186] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:56.162 00:10:56.162 real 0m12.158s 00:10:56.162 user 0m22.328s 00:10:56.162 sys 0m1.696s 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.162 18:25:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.162 ************************************ 00:10:56.162 END TEST raid_state_function_test 00:10:56.162 ************************************ 00:10:56.421 18:25:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:56.421 18:25:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:56.421 18:25:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:56.421 18:25:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.421 18:25:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.421 ************************************ 00:10:56.421 START TEST raid_state_function_test_sb 00:10:56.421 ************************************ 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:56.421 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2765924 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2765924' 00:10:56.422 Process raid pid: 2765924 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L 
bdev_raid 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2765924 /var/tmp/spdk-raid.sock 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2765924 ']' 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:56.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.422 18:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.422 [2024-07-15 18:25:41.807901] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:10:56.422 [2024-07-15 18:25:41.807966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.422 [2024-07-15 18:25:41.905871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.680 [2024-07-15 18:25:42.001755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.680 [2024-07-15 18:25:42.060581] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.680 [2024-07-15 18:25:42.060611] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.249 18:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.249 18:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:57.249 18:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:57.508 [2024-07-15 18:25:42.991621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.508 [2024-07-15 18:25:42.991661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.508 [2024-07-15 18:25:42.991670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.508 [2024-07-15 18:25:42.991678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.508 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.075 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.075 "name": "Existed_Raid", 00:10:58.075 "uuid": "8ca1b8ea-5f50-4fdb-b3e5-adcfe1f53bbc", 00:10:58.075 "strip_size_kb": 64, 00:10:58.075 "state": "configuring", 00:10:58.075 "raid_level": "concat", 00:10:58.075 "superblock": true, 00:10:58.075 "num_base_bdevs": 2, 00:10:58.075 "num_base_bdevs_discovered": 0, 00:10:58.075 "num_base_bdevs_operational": 2, 00:10:58.075 "base_bdevs_list": [ 00:10:58.075 { 00:10:58.075 "name": "BaseBdev1", 00:10:58.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.075 "is_configured": false, 00:10:58.075 "data_offset": 0, 00:10:58.075 "data_size": 0 00:10:58.075 }, 00:10:58.075 { 
00:10:58.075 "name": "BaseBdev2", 00:10:58.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.075 "is_configured": false, 00:10:58.075 "data_offset": 0, 00:10:58.075 "data_size": 0 00:10:58.075 } 00:10:58.075 ] 00:10:58.075 }' 00:10:58.075 18:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.075 18:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.011 18:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:59.270 [2024-07-15 18:25:44.623895] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.270 [2024-07-15 18:25:44.623925] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99bb80 name Existed_Raid, state configuring 00:10:59.270 18:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:59.529 [2024-07-15 18:25:44.880616] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.529 [2024-07-15 18:25:44.880649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.529 [2024-07-15 18:25:44.880658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.529 [2024-07-15 18:25:44.880666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.529 18:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.788 [2024-07-15 18:25:45.134905] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.788 BaseBdev1 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:59.788 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.047 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.306 [ 00:11:00.306 { 00:11:00.306 "name": "BaseBdev1", 00:11:00.306 "aliases": [ 00:11:00.306 "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca" 00:11:00.306 ], 00:11:00.306 "product_name": "Malloc disk", 00:11:00.306 "block_size": 512, 00:11:00.306 "num_blocks": 65536, 00:11:00.306 "uuid": "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca", 00:11:00.306 "assigned_rate_limits": { 00:11:00.306 "rw_ios_per_sec": 0, 00:11:00.306 "rw_mbytes_per_sec": 0, 00:11:00.306 "r_mbytes_per_sec": 0, 00:11:00.306 "w_mbytes_per_sec": 0 00:11:00.306 }, 00:11:00.306 "claimed": true, 00:11:00.306 "claim_type": "exclusive_write", 00:11:00.306 "zoned": false, 00:11:00.306 "supported_io_types": { 00:11:00.306 "read": true, 00:11:00.306 "write": true, 00:11:00.306 "unmap": true, 00:11:00.306 "flush": 
true, 00:11:00.306 "reset": true, 00:11:00.306 "nvme_admin": false, 00:11:00.306 "nvme_io": false, 00:11:00.306 "nvme_io_md": false, 00:11:00.306 "write_zeroes": true, 00:11:00.306 "zcopy": true, 00:11:00.306 "get_zone_info": false, 00:11:00.306 "zone_management": false, 00:11:00.306 "zone_append": false, 00:11:00.306 "compare": false, 00:11:00.306 "compare_and_write": false, 00:11:00.306 "abort": true, 00:11:00.306 "seek_hole": false, 00:11:00.306 "seek_data": false, 00:11:00.306 "copy": true, 00:11:00.306 "nvme_iov_md": false 00:11:00.306 }, 00:11:00.306 "memory_domains": [ 00:11:00.306 { 00:11:00.306 "dma_device_id": "system", 00:11:00.306 "dma_device_type": 1 00:11:00.306 }, 00:11:00.306 { 00:11:00.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.306 "dma_device_type": 2 00:11:00.306 } 00:11:00.306 ], 00:11:00.306 "driver_specific": {} 00:11:00.306 } 00:11:00.306 ] 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.306 18:25:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.306 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.564 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.564 "name": "Existed_Raid", 00:11:00.564 "uuid": "e2314628-9b55-4a91-b3da-79a9e2f6054d", 00:11:00.564 "strip_size_kb": 64, 00:11:00.564 "state": "configuring", 00:11:00.564 "raid_level": "concat", 00:11:00.564 "superblock": true, 00:11:00.564 "num_base_bdevs": 2, 00:11:00.564 "num_base_bdevs_discovered": 1, 00:11:00.564 "num_base_bdevs_operational": 2, 00:11:00.564 "base_bdevs_list": [ 00:11:00.564 { 00:11:00.564 "name": "BaseBdev1", 00:11:00.564 "uuid": "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca", 00:11:00.564 "is_configured": true, 00:11:00.564 "data_offset": 2048, 00:11:00.564 "data_size": 63488 00:11:00.564 }, 00:11:00.564 { 00:11:00.564 "name": "BaseBdev2", 00:11:00.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.565 "is_configured": false, 00:11:00.565 "data_offset": 0, 00:11:00.565 "data_size": 0 00:11:00.565 } 00:11:00.565 ] 00:11:00.565 }' 00:11:00.565 18:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.565 18:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.130 18:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:01.388 [2024-07-15 18:25:46.831485] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.388 [2024-07-15 18:25:46.831525] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99b470 name Existed_Raid, state configuring 00:11:01.388 18:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:01.648 [2024-07-15 18:25:47.092222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.648 [2024-07-15 18:25:47.093828] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.648 [2024-07-15 18:25:47.093859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.648 18:25:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.648 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.907 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.907 "name": "Existed_Raid", 00:11:01.907 "uuid": "b1272dd1-952f-4618-bcb2-44cebb0151af", 00:11:01.907 "strip_size_kb": 64, 00:11:01.907 "state": "configuring", 00:11:01.907 "raid_level": "concat", 00:11:01.907 "superblock": true, 00:11:01.907 "num_base_bdevs": 2, 00:11:01.907 "num_base_bdevs_discovered": 1, 00:11:01.907 "num_base_bdevs_operational": 2, 00:11:01.907 "base_bdevs_list": [ 00:11:01.907 { 00:11:01.907 "name": "BaseBdev1", 00:11:01.907 "uuid": "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca", 00:11:01.907 "is_configured": true, 00:11:01.907 "data_offset": 2048, 00:11:01.907 "data_size": 63488 00:11:01.907 }, 00:11:01.907 { 00:11:01.907 "name": "BaseBdev2", 00:11:01.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.907 "is_configured": false, 00:11:01.907 "data_offset": 0, 00:11:01.907 "data_size": 0 00:11:01.907 } 00:11:01.907 ] 00:11:01.907 }' 00:11:01.907 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.907 18:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.474 18:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.733 [2024-07-15 18:25:48.234491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.733 [2024-07-15 18:25:48.234629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x99c260 00:11:02.734 [2024-07-15 18:25:48.234642] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:02.734 [2024-07-15 18:25:48.234826] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99b3c0 00:11:02.734 [2024-07-15 18:25:48.234970] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x99c260 00:11:02.734 [2024-07-15 18:25:48.234980] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x99c260 00:11:02.734 [2024-07-15 18:25:48.235076] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.734 BaseBdev2 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:02.734 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:02.993 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.252 [ 00:11:03.252 { 00:11:03.252 "name": "BaseBdev2", 00:11:03.252 "aliases": [ 00:11:03.252 "c23de589-bc36-4595-92fe-7b8823193f16" 00:11:03.252 ], 00:11:03.252 "product_name": "Malloc disk", 00:11:03.252 "block_size": 512, 00:11:03.252 "num_blocks": 65536, 00:11:03.252 "uuid": "c23de589-bc36-4595-92fe-7b8823193f16", 00:11:03.252 "assigned_rate_limits": { 00:11:03.252 "rw_ios_per_sec": 0, 00:11:03.252 "rw_mbytes_per_sec": 0, 00:11:03.252 "r_mbytes_per_sec": 0, 00:11:03.252 "w_mbytes_per_sec": 0 00:11:03.252 }, 00:11:03.252 "claimed": true, 00:11:03.252 "claim_type": "exclusive_write", 00:11:03.252 "zoned": false, 00:11:03.252 "supported_io_types": { 00:11:03.252 "read": true, 00:11:03.252 "write": true, 00:11:03.252 "unmap": true, 00:11:03.252 "flush": true, 00:11:03.252 "reset": true, 00:11:03.252 "nvme_admin": false, 00:11:03.252 "nvme_io": false, 00:11:03.252 "nvme_io_md": false, 00:11:03.252 "write_zeroes": true, 00:11:03.252 "zcopy": true, 00:11:03.252 "get_zone_info": false, 00:11:03.252 "zone_management": false, 00:11:03.252 "zone_append": false, 00:11:03.252 "compare": false, 00:11:03.252 "compare_and_write": false, 00:11:03.252 "abort": true, 00:11:03.252 "seek_hole": false, 00:11:03.252 "seek_data": false, 00:11:03.252 "copy": true, 00:11:03.252 "nvme_iov_md": false 00:11:03.252 }, 00:11:03.252 "memory_domains": [ 00:11:03.252 { 00:11:03.252 "dma_device_id": "system", 00:11:03.252 "dma_device_type": 1 00:11:03.252 }, 00:11:03.252 { 00:11:03.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.252 "dma_device_type": 2 00:11:03.252 } 00:11:03.252 ], 00:11:03.252 "driver_specific": {} 00:11:03.252 } 00:11:03.252 ] 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.252 18:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.820 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.820 "name": "Existed_Raid", 00:11:03.820 "uuid": "b1272dd1-952f-4618-bcb2-44cebb0151af", 00:11:03.820 "strip_size_kb": 64, 00:11:03.820 "state": "online", 00:11:03.820 "raid_level": "concat", 00:11:03.820 "superblock": true, 00:11:03.820 
"num_base_bdevs": 2, 00:11:03.820 "num_base_bdevs_discovered": 2, 00:11:03.820 "num_base_bdevs_operational": 2, 00:11:03.820 "base_bdevs_list": [ 00:11:03.820 { 00:11:03.820 "name": "BaseBdev1", 00:11:03.820 "uuid": "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca", 00:11:03.820 "is_configured": true, 00:11:03.820 "data_offset": 2048, 00:11:03.820 "data_size": 63488 00:11:03.820 }, 00:11:03.820 { 00:11:03.820 "name": "BaseBdev2", 00:11:03.820 "uuid": "c23de589-bc36-4595-92fe-7b8823193f16", 00:11:03.820 "is_configured": true, 00:11:03.820 "data_offset": 2048, 00:11:03.820 "data_size": 63488 00:11:03.820 } 00:11:03.820 ] 00:11:03.820 }' 00:11:03.820 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.820 18:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:04.448 18:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:04.707 [2024-07-15 18:25:50.216133] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.707 18:25:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:04.707 "name": "Existed_Raid", 00:11:04.707 "aliases": [ 00:11:04.707 "b1272dd1-952f-4618-bcb2-44cebb0151af" 00:11:04.707 ], 00:11:04.707 "product_name": "Raid Volume", 00:11:04.707 "block_size": 512, 00:11:04.707 "num_blocks": 126976, 00:11:04.707 "uuid": "b1272dd1-952f-4618-bcb2-44cebb0151af", 00:11:04.707 "assigned_rate_limits": { 00:11:04.707 "rw_ios_per_sec": 0, 00:11:04.707 "rw_mbytes_per_sec": 0, 00:11:04.707 "r_mbytes_per_sec": 0, 00:11:04.707 "w_mbytes_per_sec": 0 00:11:04.707 }, 00:11:04.707 "claimed": false, 00:11:04.707 "zoned": false, 00:11:04.707 "supported_io_types": { 00:11:04.707 "read": true, 00:11:04.707 "write": true, 00:11:04.707 "unmap": true, 00:11:04.707 "flush": true, 00:11:04.707 "reset": true, 00:11:04.707 "nvme_admin": false, 00:11:04.707 "nvme_io": false, 00:11:04.707 "nvme_io_md": false, 00:11:04.707 "write_zeroes": true, 00:11:04.707 "zcopy": false, 00:11:04.707 "get_zone_info": false, 00:11:04.707 "zone_management": false, 00:11:04.707 "zone_append": false, 00:11:04.707 "compare": false, 00:11:04.707 "compare_and_write": false, 00:11:04.707 "abort": false, 00:11:04.707 "seek_hole": false, 00:11:04.707 "seek_data": false, 00:11:04.707 "copy": false, 00:11:04.707 "nvme_iov_md": false 00:11:04.707 }, 00:11:04.707 "memory_domains": [ 00:11:04.707 { 00:11:04.707 "dma_device_id": "system", 00:11:04.707 "dma_device_type": 1 00:11:04.707 }, 00:11:04.707 { 00:11:04.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.707 "dma_device_type": 2 00:11:04.707 }, 00:11:04.707 { 00:11:04.707 "dma_device_id": "system", 00:11:04.707 "dma_device_type": 1 00:11:04.707 }, 00:11:04.707 { 00:11:04.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.707 "dma_device_type": 2 00:11:04.707 } 00:11:04.707 ], 00:11:04.707 "driver_specific": { 00:11:04.707 "raid": { 00:11:04.707 "uuid": "b1272dd1-952f-4618-bcb2-44cebb0151af", 00:11:04.707 "strip_size_kb": 64, 
00:11:04.707 "state": "online", 00:11:04.707 "raid_level": "concat", 00:11:04.707 "superblock": true, 00:11:04.707 "num_base_bdevs": 2, 00:11:04.707 "num_base_bdevs_discovered": 2, 00:11:04.707 "num_base_bdevs_operational": 2, 00:11:04.707 "base_bdevs_list": [ 00:11:04.707 { 00:11:04.707 "name": "BaseBdev1", 00:11:04.707 "uuid": "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca", 00:11:04.707 "is_configured": true, 00:11:04.707 "data_offset": 2048, 00:11:04.707 "data_size": 63488 00:11:04.707 }, 00:11:04.707 { 00:11:04.707 "name": "BaseBdev2", 00:11:04.707 "uuid": "c23de589-bc36-4595-92fe-7b8823193f16", 00:11:04.707 "is_configured": true, 00:11:04.707 "data_offset": 2048, 00:11:04.707 "data_size": 63488 00:11:04.707 } 00:11:04.707 ] 00:11:04.707 } 00:11:04.707 } 00:11:04.707 }' 00:11:04.707 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.966 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:04.966 BaseBdev2' 00:11:04.966 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:04.966 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:04.966 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:05.533 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:05.533 "name": "BaseBdev1", 00:11:05.533 "aliases": [ 00:11:05.533 "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca" 00:11:05.533 ], 00:11:05.533 "product_name": "Malloc disk", 00:11:05.533 "block_size": 512, 00:11:05.533 "num_blocks": 65536, 00:11:05.533 "uuid": "5cc4a1bd-da7a-4d42-a9a0-e51a471527ca", 00:11:05.533 "assigned_rate_limits": { 00:11:05.533 "rw_ios_per_sec": 0, 
00:11:05.533 "rw_mbytes_per_sec": 0, 00:11:05.533 "r_mbytes_per_sec": 0, 00:11:05.533 "w_mbytes_per_sec": 0 00:11:05.533 }, 00:11:05.533 "claimed": true, 00:11:05.533 "claim_type": "exclusive_write", 00:11:05.533 "zoned": false, 00:11:05.533 "supported_io_types": { 00:11:05.533 "read": true, 00:11:05.533 "write": true, 00:11:05.533 "unmap": true, 00:11:05.533 "flush": true, 00:11:05.533 "reset": true, 00:11:05.533 "nvme_admin": false, 00:11:05.533 "nvme_io": false, 00:11:05.533 "nvme_io_md": false, 00:11:05.533 "write_zeroes": true, 00:11:05.533 "zcopy": true, 00:11:05.533 "get_zone_info": false, 00:11:05.533 "zone_management": false, 00:11:05.533 "zone_append": false, 00:11:05.533 "compare": false, 00:11:05.533 "compare_and_write": false, 00:11:05.533 "abort": true, 00:11:05.533 "seek_hole": false, 00:11:05.533 "seek_data": false, 00:11:05.533 "copy": true, 00:11:05.533 "nvme_iov_md": false 00:11:05.533 }, 00:11:05.533 "memory_domains": [ 00:11:05.533 { 00:11:05.533 "dma_device_id": "system", 00:11:05.533 "dma_device_type": 1 00:11:05.533 }, 00:11:05.533 { 00:11:05.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.533 "dma_device_type": 2 00:11:05.533 } 00:11:05.533 ], 00:11:05.533 "driver_specific": {} 00:11:05.533 }' 00:11:05.533 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:05.533 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:05.533 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:05.533 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:05.533 18:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:05.533 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:05.533 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:05.791 
18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:05.791 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:06.359 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:06.359 "name": "BaseBdev2", 00:11:06.359 "aliases": [ 00:11:06.359 "c23de589-bc36-4595-92fe-7b8823193f16" 00:11:06.359 ], 00:11:06.359 "product_name": "Malloc disk", 00:11:06.359 "block_size": 512, 00:11:06.359 "num_blocks": 65536, 00:11:06.359 "uuid": "c23de589-bc36-4595-92fe-7b8823193f16", 00:11:06.359 "assigned_rate_limits": { 00:11:06.359 "rw_ios_per_sec": 0, 00:11:06.359 "rw_mbytes_per_sec": 0, 00:11:06.359 "r_mbytes_per_sec": 0, 00:11:06.359 "w_mbytes_per_sec": 0 00:11:06.359 }, 00:11:06.359 "claimed": true, 00:11:06.359 "claim_type": "exclusive_write", 00:11:06.359 "zoned": false, 00:11:06.359 "supported_io_types": { 00:11:06.359 "read": true, 00:11:06.359 "write": true, 00:11:06.359 "unmap": true, 00:11:06.359 "flush": true, 00:11:06.359 "reset": true, 00:11:06.359 "nvme_admin": false, 00:11:06.359 "nvme_io": false, 00:11:06.359 "nvme_io_md": false, 00:11:06.359 "write_zeroes": true, 00:11:06.359 "zcopy": true, 
00:11:06.359 "get_zone_info": false, 00:11:06.359 "zone_management": false, 00:11:06.359 "zone_append": false, 00:11:06.359 "compare": false, 00:11:06.359 "compare_and_write": false, 00:11:06.359 "abort": true, 00:11:06.359 "seek_hole": false, 00:11:06.359 "seek_data": false, 00:11:06.359 "copy": true, 00:11:06.359 "nvme_iov_md": false 00:11:06.359 }, 00:11:06.359 "memory_domains": [ 00:11:06.359 { 00:11:06.359 "dma_device_id": "system", 00:11:06.359 "dma_device_type": 1 00:11:06.359 }, 00:11:06.359 { 00:11:06.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.359 "dma_device_type": 2 00:11:06.359 } 00:11:06.359 ], 00:11:06.359 "driver_specific": {} 00:11:06.359 }' 00:11:06.359 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.359 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.359 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:06.359 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.618 18:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.618 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:06.618 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.618 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.618 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.618 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.618 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.877 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.877 18:25:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:07.135 [2024-07-15 18:25:52.650429] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.136 [2024-07-15 18:25:52.650454] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.136 [2024-07-15 18:25:52.650494] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.136 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.394 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.394 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.653 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.653 "name": "Existed_Raid", 00:11:07.653 "uuid": "b1272dd1-952f-4618-bcb2-44cebb0151af", 00:11:07.653 "strip_size_kb": 64, 00:11:07.653 "state": "offline", 00:11:07.653 "raid_level": "concat", 00:11:07.653 "superblock": true, 00:11:07.653 "num_base_bdevs": 2, 00:11:07.653 "num_base_bdevs_discovered": 1, 00:11:07.653 "num_base_bdevs_operational": 1, 00:11:07.653 "base_bdevs_list": [ 00:11:07.653 { 00:11:07.653 "name": null, 00:11:07.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.653 "is_configured": false, 00:11:07.653 "data_offset": 2048, 00:11:07.653 "data_size": 63488 00:11:07.653 }, 00:11:07.653 { 00:11:07.654 "name": "BaseBdev2", 00:11:07.654 "uuid": "c23de589-bc36-4595-92fe-7b8823193f16", 00:11:07.654 "is_configured": true, 00:11:07.654 "data_offset": 2048, 00:11:07.654 "data_size": 63488 00:11:07.654 } 00:11:07.654 ] 00:11:07.654 }' 00:11:07.654 18:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.654 18:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.220 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:08.220 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < 
num_base_bdevs )) 00:11:08.220 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.220 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:08.479 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:08.479 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.479 18:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:08.738 [2024-07-15 18:25:54.119548] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.738 [2024-07-15 18:25:54.119594] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99c260 name Existed_Raid, state offline 00:11:08.738 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:08.738 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:08.738 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.738 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 
2765924 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2765924 ']' 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2765924 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2765924 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2765924' 00:11:09.305 killing process with pid 2765924 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2765924 00:11:09.305 [2024-07-15 18:25:54.693227] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.305 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2765924 00:11:09.305 [2024-07-15 18:25:54.694099] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.565 18:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:09.565 00:11:09.565 real 0m13.147s 00:11:09.565 user 0m24.306s 00:11:09.565 sys 0m1.683s 00:11:09.565 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.565 18:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 ************************************ 00:11:09.565 END TEST raid_state_function_test_sb 00:11:09.565 
************************************ 00:11:09.565 18:25:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:09.565 18:25:54 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:09.565 18:25:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:09.565 18:25:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.565 18:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 ************************************ 00:11:09.565 START TEST raid_superblock_test 00:11:09.565 ************************************ 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local raid_bdev_uuid 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2768085 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2768085 /var/tmp/spdk-raid.sock 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2768085 ']' 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:09.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.565 18:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 [2024-07-15 18:25:54.992855] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:11:09.565 [2024-07-15 18:25:54.992915] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768085 ] 00:11:09.565 [2024-07-15 18:25:55.091070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.823 [2024-07-15 18:25:55.188470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.823 [2024-07-15 18:25:55.248657] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.823 [2024-07-15 18:25:55.248693] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.082 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:10.340 malloc1 00:11:10.340 18:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.907 [2024-07-15 18:25:56.173442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.907 [2024-07-15 18:25:56.173487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.907 [2024-07-15 18:25:56.173505] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9be20 00:11:10.907 [2024-07-15 18:25:56.173514] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.907 [2024-07-15 18:25:56.175251] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.907 [2024-07-15 18:25:56.175285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.907 pt1 00:11:10.907 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:10.907 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:10.907 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:10.907 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:10.907 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:10.908 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.908 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.908 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.908 18:25:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:10.908 malloc2 00:11:11.166 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.424 [2024-07-15 18:25:56.932301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.424 [2024-07-15 18:25:56.932344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.424 [2024-07-15 18:25:56.932364] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe45ed0 00:11:11.424 [2024-07-15 18:25:56.932374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.424 [2024-07-15 18:25:56.933966] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.424 [2024-07-15 18:25:56.933992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.424 pt2 00:11:11.424 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:11.424 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:11.424 18:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:11:11.682 [2024-07-15 18:25:57.197028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:11.682 [2024-07-15 18:25:57.198653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.682 [2024-07-15 18:25:57.198803] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe45170 
00:11:11.682 [2024-07-15 18:25:57.198815] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:11.682 [2024-07-15 18:25:57.199025] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc9b750 00:11:11.682 [2024-07-15 18:25:57.199171] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe45170 00:11:11.682 [2024-07-15 18:25:57.199180] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe45170 00:11:11.682 [2024-07-15 18:25:57.199280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.682 18:25:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.940 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.940 "name": "raid_bdev1", 00:11:11.940 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:11.940 "strip_size_kb": 64, 00:11:11.940 "state": "online", 00:11:11.940 "raid_level": "concat", 00:11:11.940 "superblock": true, 00:11:11.940 "num_base_bdevs": 2, 00:11:11.940 "num_base_bdevs_discovered": 2, 00:11:11.940 "num_base_bdevs_operational": 2, 00:11:11.940 "base_bdevs_list": [ 00:11:11.940 { 00:11:11.940 "name": "pt1", 00:11:11.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.940 "is_configured": true, 00:11:11.940 "data_offset": 2048, 00:11:11.940 "data_size": 63488 00:11:11.940 }, 00:11:11.940 { 00:11:11.940 "name": "pt2", 00:11:11.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.940 "is_configured": true, 00:11:11.940 "data_offset": 2048, 00:11:11.940 "data_size": 63488 00:11:11.940 } 00:11:11.940 ] 00:11:11.940 }' 00:11:11.940 18:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.940 18:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:12.875 [2024-07-15 18:25:58.344345] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:12.875 "name": "raid_bdev1", 00:11:12.875 "aliases": [ 00:11:12.875 "40b29d13-f185-489e-bda6-52f9ff50d961" 00:11:12.875 ], 00:11:12.875 "product_name": "Raid Volume", 00:11:12.875 "block_size": 512, 00:11:12.875 "num_blocks": 126976, 00:11:12.875 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:12.875 "assigned_rate_limits": { 00:11:12.875 "rw_ios_per_sec": 0, 00:11:12.875 "rw_mbytes_per_sec": 0, 00:11:12.875 "r_mbytes_per_sec": 0, 00:11:12.875 "w_mbytes_per_sec": 0 00:11:12.875 }, 00:11:12.875 "claimed": false, 00:11:12.875 "zoned": false, 00:11:12.875 "supported_io_types": { 00:11:12.875 "read": true, 00:11:12.875 "write": true, 00:11:12.875 "unmap": true, 00:11:12.875 "flush": true, 00:11:12.875 "reset": true, 00:11:12.875 "nvme_admin": false, 00:11:12.875 "nvme_io": false, 00:11:12.875 "nvme_io_md": false, 00:11:12.875 "write_zeroes": true, 00:11:12.875 "zcopy": false, 00:11:12.875 "get_zone_info": false, 00:11:12.875 "zone_management": false, 00:11:12.875 "zone_append": false, 00:11:12.875 "compare": false, 00:11:12.875 "compare_and_write": false, 00:11:12.875 "abort": false, 00:11:12.875 "seek_hole": false, 00:11:12.875 "seek_data": false, 00:11:12.875 "copy": false, 00:11:12.875 "nvme_iov_md": false 00:11:12.875 }, 00:11:12.875 "memory_domains": [ 00:11:12.875 { 00:11:12.875 "dma_device_id": "system", 00:11:12.875 "dma_device_type": 1 00:11:12.875 }, 00:11:12.875 { 00:11:12.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.875 "dma_device_type": 2 00:11:12.875 }, 00:11:12.875 { 00:11:12.875 "dma_device_id": "system", 
00:11:12.875 "dma_device_type": 1 00:11:12.875 }, 00:11:12.875 { 00:11:12.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.875 "dma_device_type": 2 00:11:12.875 } 00:11:12.875 ], 00:11:12.875 "driver_specific": { 00:11:12.875 "raid": { 00:11:12.875 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:12.875 "strip_size_kb": 64, 00:11:12.875 "state": "online", 00:11:12.875 "raid_level": "concat", 00:11:12.875 "superblock": true, 00:11:12.875 "num_base_bdevs": 2, 00:11:12.875 "num_base_bdevs_discovered": 2, 00:11:12.875 "num_base_bdevs_operational": 2, 00:11:12.875 "base_bdevs_list": [ 00:11:12.875 { 00:11:12.875 "name": "pt1", 00:11:12.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.875 "is_configured": true, 00:11:12.875 "data_offset": 2048, 00:11:12.875 "data_size": 63488 00:11:12.875 }, 00:11:12.875 { 00:11:12.875 "name": "pt2", 00:11:12.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.875 "is_configured": true, 00:11:12.875 "data_offset": 2048, 00:11:12.875 "data_size": 63488 00:11:12.875 } 00:11:12.875 ] 00:11:12.875 } 00:11:12.875 } 00:11:12.875 }' 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:12.875 pt2' 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:12.875 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.443 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.443 "name": "pt1", 00:11:13.443 "aliases": [ 00:11:13.443 "00000000-0000-0000-0000-000000000001" 
00:11:13.443 ], 00:11:13.443 "product_name": "passthru", 00:11:13.443 "block_size": 512, 00:11:13.443 "num_blocks": 65536, 00:11:13.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.443 "assigned_rate_limits": { 00:11:13.443 "rw_ios_per_sec": 0, 00:11:13.443 "rw_mbytes_per_sec": 0, 00:11:13.443 "r_mbytes_per_sec": 0, 00:11:13.443 "w_mbytes_per_sec": 0 00:11:13.443 }, 00:11:13.443 "claimed": true, 00:11:13.443 "claim_type": "exclusive_write", 00:11:13.443 "zoned": false, 00:11:13.443 "supported_io_types": { 00:11:13.443 "read": true, 00:11:13.443 "write": true, 00:11:13.443 "unmap": true, 00:11:13.443 "flush": true, 00:11:13.443 "reset": true, 00:11:13.443 "nvme_admin": false, 00:11:13.443 "nvme_io": false, 00:11:13.443 "nvme_io_md": false, 00:11:13.443 "write_zeroes": true, 00:11:13.443 "zcopy": true, 00:11:13.443 "get_zone_info": false, 00:11:13.443 "zone_management": false, 00:11:13.443 "zone_append": false, 00:11:13.443 "compare": false, 00:11:13.443 "compare_and_write": false, 00:11:13.443 "abort": true, 00:11:13.443 "seek_hole": false, 00:11:13.443 "seek_data": false, 00:11:13.443 "copy": true, 00:11:13.443 "nvme_iov_md": false 00:11:13.443 }, 00:11:13.443 "memory_domains": [ 00:11:13.443 { 00:11:13.443 "dma_device_id": "system", 00:11:13.443 "dma_device_type": 1 00:11:13.443 }, 00:11:13.443 { 00:11:13.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.443 "dma_device_type": 2 00:11:13.443 } 00:11:13.443 ], 00:11:13.443 "driver_specific": { 00:11:13.443 "passthru": { 00:11:13.443 "name": "pt1", 00:11:13.443 "base_bdev_name": "malloc1" 00:11:13.443 } 00:11:13.443 } 00:11:13.443 }' 00:11:13.443 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.443 18:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.701 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.701 18:25:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.701 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.701 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.701 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.701 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:13.960 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.218 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.218 "name": "pt2", 00:11:14.218 "aliases": [ 00:11:14.218 "00000000-0000-0000-0000-000000000002" 00:11:14.218 ], 00:11:14.218 "product_name": "passthru", 00:11:14.218 "block_size": 512, 00:11:14.218 "num_blocks": 65536, 00:11:14.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.218 "assigned_rate_limits": { 00:11:14.218 "rw_ios_per_sec": 0, 00:11:14.218 "rw_mbytes_per_sec": 0, 00:11:14.218 "r_mbytes_per_sec": 0, 00:11:14.218 "w_mbytes_per_sec": 0 00:11:14.218 }, 00:11:14.218 "claimed": true, 00:11:14.218 "claim_type": "exclusive_write", 00:11:14.218 "zoned": false, 00:11:14.218 "supported_io_types": { 00:11:14.218 "read": true, 
00:11:14.218 "write": true, 00:11:14.218 "unmap": true, 00:11:14.218 "flush": true, 00:11:14.218 "reset": true, 00:11:14.218 "nvme_admin": false, 00:11:14.218 "nvme_io": false, 00:11:14.218 "nvme_io_md": false, 00:11:14.218 "write_zeroes": true, 00:11:14.218 "zcopy": true, 00:11:14.218 "get_zone_info": false, 00:11:14.218 "zone_management": false, 00:11:14.218 "zone_append": false, 00:11:14.218 "compare": false, 00:11:14.218 "compare_and_write": false, 00:11:14.218 "abort": true, 00:11:14.218 "seek_hole": false, 00:11:14.218 "seek_data": false, 00:11:14.218 "copy": true, 00:11:14.219 "nvme_iov_md": false 00:11:14.219 }, 00:11:14.219 "memory_domains": [ 00:11:14.219 { 00:11:14.219 "dma_device_id": "system", 00:11:14.219 "dma_device_type": 1 00:11:14.219 }, 00:11:14.219 { 00:11:14.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.219 "dma_device_type": 2 00:11:14.219 } 00:11:14.219 ], 00:11:14.219 "driver_specific": { 00:11:14.219 "passthru": { 00:11:14.219 "name": "pt2", 00:11:14.219 "base_bdev_name": "malloc2" 00:11:14.219 } 00:11:14.219 } 00:11:14.219 }' 00:11:14.219 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.219 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.219 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.219 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.219 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.478 18:25:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:14.478 18:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:15.062 [2024-07-15 18:26:00.442042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.062 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=40b29d13-f185-489e-bda6-52f9ff50d961 00:11:15.062 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 40b29d13-f185-489e-bda6-52f9ff50d961 ']' 00:11:15.062 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:15.320 [2024-07-15 18:26:00.710478] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.320 [2024-07-15 18:26:00.710500] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.320 [2024-07-15 18:26:00.710553] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.320 [2024-07-15 18:26:00.710595] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.320 [2024-07-15 18:26:00.710604] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe45170 name raid_bdev1, state offline 00:11:15.320 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.320 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:15.579 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:15.579 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:15.579 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.579 18:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:16.147 18:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.147 18:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:16.713 18:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:16.713 18:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n 
raid_bdev1 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:16.713 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:16.971 [2024-07-15 18:26:02.479161] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:16.971 [2024-07-15 18:26:02.480575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:16.971 [2024-07-15 18:26:02.480629] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:16.971 [2024-07-15 18:26:02.480665] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc2 00:11:16.971 [2024-07-15 18:26:02.480680] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.971 [2024-07-15 18:26:02.480687] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe46900 name raid_bdev1, state configuring 00:11:16.971 request: 00:11:16.971 { 00:11:16.971 "name": "raid_bdev1", 00:11:16.971 "raid_level": "concat", 00:11:16.971 "base_bdevs": [ 00:11:16.971 "malloc1", 00:11:16.971 "malloc2" 00:11:16.971 ], 00:11:16.971 "strip_size_kb": 64, 00:11:16.971 "superblock": false, 00:11:16.971 "method": "bdev_raid_create", 00:11:16.971 "req_id": 1 00:11:16.971 } 00:11:16.971 Got JSON-RPC error response 00:11:16.971 response: 00:11:16.971 { 00:11:16.971 "code": -17, 00:11:16.971 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:16.971 } 00:11:16.971 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:16.971 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:16.971 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:16.971 18:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:16.971 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.971 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:17.539 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:17.539 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:17.539 18:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.106 [2024-07-15 
18:26:03.461716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.106 [2024-07-15 18:26:03.461765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.106 [2024-07-15 18:26:03.461781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9ac10 00:11:18.106 [2024-07-15 18:26:03.461791] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.106 [2024-07-15 18:26:03.463466] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.106 [2024-07-15 18:26:03.463493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.106 [2024-07-15 18:26:03.463557] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.106 [2024-07-15 18:26:03.463581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.106 pt1 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.106 18:26:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.106 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.364 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.364 "name": "raid_bdev1", 00:11:18.364 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:18.364 "strip_size_kb": 64, 00:11:18.364 "state": "configuring", 00:11:18.364 "raid_level": "concat", 00:11:18.364 "superblock": true, 00:11:18.364 "num_base_bdevs": 2, 00:11:18.364 "num_base_bdevs_discovered": 1, 00:11:18.364 "num_base_bdevs_operational": 2, 00:11:18.364 "base_bdevs_list": [ 00:11:18.365 { 00:11:18.365 "name": "pt1", 00:11:18.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.365 "is_configured": true, 00:11:18.365 "data_offset": 2048, 00:11:18.365 "data_size": 63488 00:11:18.365 }, 00:11:18.365 { 00:11:18.365 "name": null, 00:11:18.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.365 "is_configured": false, 00:11:18.365 "data_offset": 2048, 00:11:18.365 "data_size": 63488 00:11:18.365 } 00:11:18.365 ] 00:11:18.365 }' 00:11:18.365 18:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.365 18:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.987 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:18.987 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:18.987 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:18.987 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.987 [2024-07-15 18:26:04.524586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.987 [2024-07-15 18:26:04.524634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.987 [2024-07-15 18:26:04.524650] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9c050 00:11:18.987 [2024-07-15 18:26:04.524659] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.987 [2024-07-15 18:26:04.525009] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.987 [2024-07-15 18:26:04.525026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.987 [2024-07-15 18:26:04.525085] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.987 [2024-07-15 18:26:04.525102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.987 [2024-07-15 18:26:04.525198] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc9b0c0 00:11:18.987 [2024-07-15 18:26:04.525207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:18.987 [2024-07-15 18:26:04.525385] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe4a090 00:11:18.987 [2024-07-15 18:26:04.525509] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc9b0c0 00:11:18.987 [2024-07-15 18:26:04.525517] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc9b0c0 00:11:18.987 [2024-07-15 18:26:04.525616] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.987 pt2 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 
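[editorial sketch, not part of the test run] The `verify_raid_bdev_state` helper exercised above (bdev_raid.sh@116-126) works by pulling one entry out of `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and checking its fields against the expected state. A minimal Python stand-in for that jq selection, run against a trimmed sample of the RPC output shown in this log (no live SPDK target assumed; `select_raid` is a hypothetical helper name):

```python
import json

# Trimmed sample of bdev_raid_get_bdevs output, copied from the log above.
sample = json.loads("""
[
  {"name": "raid_bdev1", "state": "online", "raid_level": "concat",
   "strip_size_kb": 64, "num_base_bdevs": 2,
   "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}
]
""")

def select_raid(bdevs, name):
    # Mirrors jq's '.[] | select(.name == "...")': first matching entry or None.
    return next((b for b in bdevs if b["name"] == name), None)

info = select_raid(sample, "raid_bdev1")
assert info["state"] == "online"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 2
```

After pt2 is attached the test expects exactly this `online` state with both base bdevs discovered, which is what the log's next `raid_bdev_info` dump shows.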
00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.246 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.504 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.504 "name": "raid_bdev1", 00:11:19.504 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:19.504 "strip_size_kb": 64, 00:11:19.504 "state": "online", 00:11:19.504 "raid_level": "concat", 00:11:19.504 "superblock": true, 00:11:19.504 "num_base_bdevs": 2, 00:11:19.504 "num_base_bdevs_discovered": 2, 00:11:19.504 "num_base_bdevs_operational": 2, 
00:11:19.504 "base_bdevs_list": [ 00:11:19.504 { 00:11:19.504 "name": "pt1", 00:11:19.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.504 "is_configured": true, 00:11:19.504 "data_offset": 2048, 00:11:19.504 "data_size": 63488 00:11:19.504 }, 00:11:19.504 { 00:11:19.504 "name": "pt2", 00:11:19.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.504 "is_configured": true, 00:11:19.504 "data_offset": 2048, 00:11:19.504 "data_size": 63488 00:11:19.504 } 00:11:19.504 ] 00:11:19.504 }' 00:11:19.504 18:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.504 18:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:20.071 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:20.331 [2024-07-15 18:26:05.675962] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.331 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:20.331 "name": "raid_bdev1", 00:11:20.331 "aliases": [ 00:11:20.331 "40b29d13-f185-489e-bda6-52f9ff50d961" 00:11:20.331 ], 
00:11:20.331 "product_name": "Raid Volume", 00:11:20.331 "block_size": 512, 00:11:20.331 "num_blocks": 126976, 00:11:20.331 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:20.331 "assigned_rate_limits": { 00:11:20.331 "rw_ios_per_sec": 0, 00:11:20.331 "rw_mbytes_per_sec": 0, 00:11:20.331 "r_mbytes_per_sec": 0, 00:11:20.331 "w_mbytes_per_sec": 0 00:11:20.331 }, 00:11:20.331 "claimed": false, 00:11:20.331 "zoned": false, 00:11:20.331 "supported_io_types": { 00:11:20.331 "read": true, 00:11:20.331 "write": true, 00:11:20.331 "unmap": true, 00:11:20.331 "flush": true, 00:11:20.331 "reset": true, 00:11:20.331 "nvme_admin": false, 00:11:20.331 "nvme_io": false, 00:11:20.331 "nvme_io_md": false, 00:11:20.331 "write_zeroes": true, 00:11:20.331 "zcopy": false, 00:11:20.331 "get_zone_info": false, 00:11:20.331 "zone_management": false, 00:11:20.331 "zone_append": false, 00:11:20.331 "compare": false, 00:11:20.331 "compare_and_write": false, 00:11:20.331 "abort": false, 00:11:20.331 "seek_hole": false, 00:11:20.331 "seek_data": false, 00:11:20.331 "copy": false, 00:11:20.331 "nvme_iov_md": false 00:11:20.331 }, 00:11:20.331 "memory_domains": [ 00:11:20.331 { 00:11:20.331 "dma_device_id": "system", 00:11:20.331 "dma_device_type": 1 00:11:20.331 }, 00:11:20.331 { 00:11:20.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.331 "dma_device_type": 2 00:11:20.331 }, 00:11:20.331 { 00:11:20.331 "dma_device_id": "system", 00:11:20.331 "dma_device_type": 1 00:11:20.331 }, 00:11:20.331 { 00:11:20.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.331 "dma_device_type": 2 00:11:20.331 } 00:11:20.331 ], 00:11:20.331 "driver_specific": { 00:11:20.331 "raid": { 00:11:20.331 "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961", 00:11:20.331 "strip_size_kb": 64, 00:11:20.331 "state": "online", 00:11:20.331 "raid_level": "concat", 00:11:20.331 "superblock": true, 00:11:20.331 "num_base_bdevs": 2, 00:11:20.331 "num_base_bdevs_discovered": 2, 00:11:20.331 "num_base_bdevs_operational": 
2, 00:11:20.331 "base_bdevs_list": [ 00:11:20.331 { 00:11:20.331 "name": "pt1", 00:11:20.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.331 "is_configured": true, 00:11:20.331 "data_offset": 2048, 00:11:20.331 "data_size": 63488 00:11:20.331 }, 00:11:20.331 { 00:11:20.331 "name": "pt2", 00:11:20.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.331 "is_configured": true, 00:11:20.331 "data_offset": 2048, 00:11:20.331 "data_size": 63488 00:11:20.331 } 00:11:20.331 ] 00:11:20.331 } 00:11:20.331 } 00:11:20.331 }' 00:11:20.331 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.331 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:20.331 pt2' 00:11:20.331 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:20.331 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:20.331 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:20.590 18:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:20.590 "name": "pt1", 00:11:20.590 "aliases": [ 00:11:20.590 "00000000-0000-0000-0000-000000000001" 00:11:20.590 ], 00:11:20.590 "product_name": "passthru", 00:11:20.590 "block_size": 512, 00:11:20.590 "num_blocks": 65536, 00:11:20.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.590 "assigned_rate_limits": { 00:11:20.590 "rw_ios_per_sec": 0, 00:11:20.590 "rw_mbytes_per_sec": 0, 00:11:20.590 "r_mbytes_per_sec": 0, 00:11:20.590 "w_mbytes_per_sec": 0 00:11:20.590 }, 00:11:20.590 "claimed": true, 00:11:20.590 "claim_type": "exclusive_write", 00:11:20.590 "zoned": false, 00:11:20.590 "supported_io_types": { 00:11:20.590 "read": true, 
00:11:20.590 "write": true, 00:11:20.590 "unmap": true, 00:11:20.590 "flush": true, 00:11:20.590 "reset": true, 00:11:20.590 "nvme_admin": false, 00:11:20.590 "nvme_io": false, 00:11:20.590 "nvme_io_md": false, 00:11:20.590 "write_zeroes": true, 00:11:20.590 "zcopy": true, 00:11:20.590 "get_zone_info": false, 00:11:20.590 "zone_management": false, 00:11:20.590 "zone_append": false, 00:11:20.590 "compare": false, 00:11:20.590 "compare_and_write": false, 00:11:20.590 "abort": true, 00:11:20.590 "seek_hole": false, 00:11:20.590 "seek_data": false, 00:11:20.590 "copy": true, 00:11:20.590 "nvme_iov_md": false 00:11:20.590 }, 00:11:20.590 "memory_domains": [ 00:11:20.590 { 00:11:20.590 "dma_device_id": "system", 00:11:20.590 "dma_device_type": 1 00:11:20.590 }, 00:11:20.590 { 00:11:20.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.590 "dma_device_type": 2 00:11:20.590 } 00:11:20.590 ], 00:11:20.590 "driver_specific": { 00:11:20.590 "passthru": { 00:11:20.590 "name": "pt1", 00:11:20.590 "base_bdev_name": "malloc1" 00:11:20.590 } 00:11:20.590 } 00:11:20.590 }' 00:11:20.590 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:20.590 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:20.590 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:20.590 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:20.849 18:26:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:20.849 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.108 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.108 "name": "pt2", 00:11:21.108 "aliases": [ 00:11:21.108 "00000000-0000-0000-0000-000000000002" 00:11:21.108 ], 00:11:21.108 "product_name": "passthru", 00:11:21.108 "block_size": 512, 00:11:21.108 "num_blocks": 65536, 00:11:21.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.108 "assigned_rate_limits": { 00:11:21.108 "rw_ios_per_sec": 0, 00:11:21.108 "rw_mbytes_per_sec": 0, 00:11:21.108 "r_mbytes_per_sec": 0, 00:11:21.108 "w_mbytes_per_sec": 0 00:11:21.108 }, 00:11:21.108 "claimed": true, 00:11:21.108 "claim_type": "exclusive_write", 00:11:21.108 "zoned": false, 00:11:21.108 "supported_io_types": { 00:11:21.108 "read": true, 00:11:21.108 "write": true, 00:11:21.108 "unmap": true, 00:11:21.108 "flush": true, 00:11:21.108 "reset": true, 00:11:21.108 "nvme_admin": false, 00:11:21.108 "nvme_io": false, 00:11:21.108 "nvme_io_md": false, 00:11:21.108 "write_zeroes": true, 00:11:21.108 "zcopy": true, 00:11:21.108 "get_zone_info": false, 00:11:21.108 "zone_management": false, 00:11:21.108 "zone_append": false, 00:11:21.108 "compare": false, 00:11:21.108 "compare_and_write": false, 00:11:21.108 "abort": true, 00:11:21.108 "seek_hole": false, 00:11:21.108 "seek_data": false, 00:11:21.108 "copy": 
true, 00:11:21.108 "nvme_iov_md": false 00:11:21.108 }, 00:11:21.108 "memory_domains": [ 00:11:21.108 { 00:11:21.108 "dma_device_id": "system", 00:11:21.108 "dma_device_type": 1 00:11:21.108 }, 00:11:21.108 { 00:11:21.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.108 "dma_device_type": 2 00:11:21.108 } 00:11:21.108 ], 00:11:21.108 "driver_specific": { 00:11:21.108 "passthru": { 00:11:21.108 "name": "pt2", 00:11:21.108 "base_bdev_name": "malloc2" 00:11:21.108 } 00:11:21.108 } 00:11:21.108 }' 00:11:21.108 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.367 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.626 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.626 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.626 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:21.626 18:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 
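[editorial sketch, not part of the test run] The check at bdev_raid.sh@486 below extracts the volume UUID with `jq -r '.[] | .uuid'` and compares it against the UUID captured at creation time (`40b29d13-f185-489e-bda6-52f9ff50d961` in this run). A self-contained Python equivalent of that extraction and comparison, using sample data copied from the log (no live SPDK target assumed):

```python
import json

# UUID recorded earlier in the test via bdev_raid.sh@434.
created_uuid = "40b29d13-f185-489e-bda6-52f9ff50d961"

# Trimmed sample of 'bdev_get_bdevs -b raid_bdev1' output from the log.
rpc_output = json.loads(
    '[{"name": "raid_bdev1", "uuid": "40b29d13-f185-489e-bda6-52f9ff50d961"}]'
)

# Equivalent of: jq -r '.[] | .uuid'
uuids = [b["uuid"] for b in rpc_output]
# Equivalent of the shell test: '[' "$uuid" '!=' "$created_uuid" ']' must not fire.
assert uuids == [created_uuid]
```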
00:11:21.893 [2024-07-15 18:26:07.208105] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 40b29d13-f185-489e-bda6-52f9ff50d961 '!=' 40b29d13-f185-489e-bda6-52f9ff50d961 ']' 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2768085 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2768085 ']' 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2768085 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2768085 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2768085' 00:11:21.894 killing process with pid 2768085 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2768085 00:11:21.894 [2024-07-15 18:26:07.274962] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.894 [2024-07-15 18:26:07.275017] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.894 [2024-07-15 
18:26:07.275061] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.894 [2024-07-15 18:26:07.275069] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc9b0c0 name raid_bdev1, state offline 00:11:21.894 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2768085 00:11:21.894 [2024-07-15 18:26:07.291828] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.157 18:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:22.157 00:11:22.157 real 0m12.559s 00:11:22.157 user 0m23.603s 00:11:22.157 sys 0m1.736s 00:11:22.157 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.157 18:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.157 ************************************ 00:11:22.157 END TEST raid_superblock_test 00:11:22.157 ************************************ 00:11:22.157 18:26:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:22.157 18:26:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:22.157 18:26:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:22.157 18:26:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.157 18:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.157 ************************************ 00:11:22.157 START TEST raid_read_error_test 00:11:22.157 ************************************ 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:22.157 18:26:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WcjYdPb2e9 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2770413 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2770413 /var/tmp/spdk-raid.sock 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2770413 ']' 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.157 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:22.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:22.158 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.158 18:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.158 [2024-07-15 18:26:07.608045] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:11:22.158 [2024-07-15 18:26:07.608108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770413 ] 00:11:22.158 [2024-07-15 18:26:07.706108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.416 [2024-07-15 18:26:07.801543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.416 [2024-07-15 18:26:07.862378] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.416 [2024-07-15 18:26:07.862411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.351 18:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.351 18:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:23.351 18:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:23.351 18:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.351 BaseBdev1_malloc 00:11:23.351 18:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:23.609 true 00:11:23.609 18:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.866 [2024-07-15 18:26:09.281188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.867 [2024-07-15 18:26:09.281228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:23.867 [2024-07-15 18:26:09.281245] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1133d20 00:11:23.867 [2024-07-15 18:26:09.281255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.867 [2024-07-15 18:26:09.283018] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.867 [2024-07-15 18:26:09.283045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.867 BaseBdev1 00:11:23.867 18:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:23.867 18:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.125 BaseBdev2_malloc 00:11:24.125 18:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:24.384 true 00:11:24.384 18:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:24.642 [2024-07-15 18:26:10.055857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:24.642 [2024-07-15 18:26:10.055899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.642 [2024-07-15 18:26:10.055917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1138d50 00:11:24.642 [2024-07-15 18:26:10.055927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.642 [2024-07-15 18:26:10.057600] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.642 [2024-07-15 18:26:10.057626] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.642 BaseBdev2 00:11:24.642 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:24.900 [2024-07-15 18:26:10.312570] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.900 [2024-07-15 18:26:10.313943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.900 [2024-07-15 18:26:10.314139] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x113a0e0 00:11:24.900 [2024-07-15 18:26:10.314151] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:24.900 [2024-07-15 18:26:10.314350] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11427d0 00:11:24.900 [2024-07-15 18:26:10.314508] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x113a0e0 00:11:24.900 [2024-07-15 18:26:10.314517] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x113a0e0 00:11:24.900 [2024-07-15 18:26:10.314630] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:24.900 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:24.901 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:24.901 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:24.901 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.901 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.158 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.158 "name": "raid_bdev1", 00:11:25.158 "uuid": "f3c13fc1-ca37-4d0a-a7f0-a9e8414607db", 00:11:25.158 "strip_size_kb": 64, 00:11:25.158 "state": "online", 00:11:25.158 "raid_level": "concat", 00:11:25.158 "superblock": true, 00:11:25.158 "num_base_bdevs": 2, 00:11:25.158 "num_base_bdevs_discovered": 2, 00:11:25.158 "num_base_bdevs_operational": 2, 00:11:25.158 "base_bdevs_list": [ 00:11:25.158 { 00:11:25.158 "name": "BaseBdev1", 00:11:25.158 "uuid": "cab9e454-f263-5b01-88df-3999dc348578", 00:11:25.159 "is_configured": true, 00:11:25.159 "data_offset": 2048, 00:11:25.159 "data_size": 63488 00:11:25.159 }, 00:11:25.159 { 00:11:25.159 "name": "BaseBdev2", 00:11:25.159 "uuid": "4e5e7483-6780-5b5d-a3bc-ec18d6e69491", 00:11:25.159 "is_configured": true, 00:11:25.159 "data_offset": 2048, 00:11:25.159 "data_size": 63488 00:11:25.159 } 00:11:25.159 ] 00:11:25.159 }' 00:11:25.159 18:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.159 18:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.724 18:26:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:25.724 18:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:25.983 [2024-07-15 18:26:11.331573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1135ac0 00:11:26.914 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.172 18:26:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.172 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.430 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.430 "name": "raid_bdev1", 00:11:27.430 "uuid": "f3c13fc1-ca37-4d0a-a7f0-a9e8414607db", 00:11:27.430 "strip_size_kb": 64, 00:11:27.430 "state": "online", 00:11:27.430 "raid_level": "concat", 00:11:27.430 "superblock": true, 00:11:27.430 "num_base_bdevs": 2, 00:11:27.430 "num_base_bdevs_discovered": 2, 00:11:27.430 "num_base_bdevs_operational": 2, 00:11:27.430 "base_bdevs_list": [ 00:11:27.430 { 00:11:27.430 "name": "BaseBdev1", 00:11:27.430 "uuid": "cab9e454-f263-5b01-88df-3999dc348578", 00:11:27.430 "is_configured": true, 00:11:27.430 "data_offset": 2048, 00:11:27.430 "data_size": 63488 00:11:27.430 }, 00:11:27.430 { 00:11:27.430 "name": "BaseBdev2", 00:11:27.430 "uuid": "4e5e7483-6780-5b5d-a3bc-ec18d6e69491", 00:11:27.430 "is_configured": true, 00:11:27.430 "data_offset": 2048, 00:11:27.430 "data_size": 63488 00:11:27.430 } 00:11:27.430 ] 00:11:27.430 }' 00:11:27.430 18:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.430 18:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.996 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:28.255 [2024-07-15 18:26:13.623123] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.255 [2024-07-15 18:26:13.623154] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:11:28.255 [2024-07-15 18:26:13.626552] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.255 [2024-07-15 18:26:13.626582] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.255 [2024-07-15 18:26:13.626609] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.255 [2024-07-15 18:26:13.626622] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x113a0e0 name raid_bdev1, state offline 00:11:28.255 0 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2770413 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2770413 ']' 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2770413 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770413 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770413' 00:11:28.255 killing process with pid 2770413 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2770413 00:11:28.255 [2024-07-15 18:26:13.701149] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.255 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2770413 00:11:28.255 [2024-07-15 18:26:13.711519] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WcjYdPb2e9 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:11:28.514 00:11:28.514 real 0m6.384s 00:11:28.514 user 0m10.317s 00:11:28.514 sys 0m0.877s 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.514 18:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.514 ************************************ 00:11:28.514 END TEST raid_read_error_test 00:11:28.514 ************************************ 00:11:28.514 18:26:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:28.514 18:26:13 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:28.514 18:26:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:28.514 18:26:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.514 18:26:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.514 ************************************ 00:11:28.514 START TEST raid_write_error_test 00:11:28.514 ************************************ 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:28.514 18:26:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.yb8JC0xBO1 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2771370 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2771370 /var/tmp/spdk-raid.sock 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2771370 ']' 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:28.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.514 18:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.514 [2024-07-15 18:26:14.033716] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:11:28.514 [2024-07-15 18:26:14.033778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771370 ] 00:11:28.773 [2024-07-15 18:26:14.125461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.773 [2024-07-15 18:26:14.217723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.773 [2024-07-15 18:26:14.286570] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.773 [2024-07-15 18:26:14.286604] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.708 18:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.708 18:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:29.708 18:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:29.708 18:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.967 BaseBdev1_malloc 00:11:29.967 18:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:30.225 true 00:11:30.225 18:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.792 [2024-07-15 18:26:16.171235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.792 [2024-07-15 18:26:16.171276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:30.792 [2024-07-15 18:26:16.171293] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3bd20 00:11:30.792 [2024-07-15 18:26:16.171302] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.792 [2024-07-15 18:26:16.173108] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.792 [2024-07-15 18:26:16.173136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.792 BaseBdev1 00:11:30.792 18:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:30.792 18:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.050 BaseBdev2_malloc 00:11:31.050 18:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:31.308 true 00:11:31.308 18:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.875 [2024-07-15 18:26:17.186399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.875 [2024-07-15 18:26:17.186440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.875 [2024-07-15 18:26:17.186458] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e40d50 00:11:31.875 [2024-07-15 18:26:17.186467] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.875 [2024-07-15 18:26:17.188096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.875 [2024-07-15 18:26:17.188121] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.875 BaseBdev2 00:11:31.875 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:32.134 [2024-07-15 18:26:17.451173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.134 [2024-07-15 18:26:17.452603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.134 [2024-07-15 18:26:17.452787] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e420e0 00:11:32.134 [2024-07-15 18:26:17.452799] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:32.134 [2024-07-15 18:26:17.453005] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e4a7d0 00:11:32.134 [2024-07-15 18:26:17.453164] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e420e0 00:11:32.134 [2024-07-15 18:26:17.453173] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e420e0 00:11:32.134 [2024-07-15 18:26:17.453285] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.134 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.392 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.392 "name": "raid_bdev1", 00:11:32.392 "uuid": "ee188161-804c-4fe2-b629-5ad781b6773a", 00:11:32.392 "strip_size_kb": 64, 00:11:32.392 "state": "online", 00:11:32.392 "raid_level": "concat", 00:11:32.392 "superblock": true, 00:11:32.392 "num_base_bdevs": 2, 00:11:32.392 "num_base_bdevs_discovered": 2, 00:11:32.392 "num_base_bdevs_operational": 2, 00:11:32.392 "base_bdevs_list": [ 00:11:32.392 { 00:11:32.392 "name": "BaseBdev1", 00:11:32.392 "uuid": "2577bb2e-38be-515c-9299-ac4fd3fa6675", 00:11:32.392 "is_configured": true, 00:11:32.392 "data_offset": 2048, 00:11:32.392 "data_size": 63488 00:11:32.392 }, 00:11:32.392 { 00:11:32.392 "name": "BaseBdev2", 00:11:32.392 "uuid": "1f78a45b-4d31-5fbb-8a6c-a60badfe047b", 00:11:32.392 "is_configured": true, 00:11:32.392 "data_offset": 2048, 00:11:32.392 "data_size": 63488 00:11:32.392 } 00:11:32.392 ] 00:11:32.392 }' 00:11:32.392 18:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.392 18:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.959 
18:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:32.959 18:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:33.230 [2024-07-15 18:26:18.558413] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e3dac0 00:11:33.857 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.425 18:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.684 18:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.684 "name": "raid_bdev1", 00:11:34.684 "uuid": "ee188161-804c-4fe2-b629-5ad781b6773a", 00:11:34.684 "strip_size_kb": 64, 00:11:34.684 "state": "online", 00:11:34.684 "raid_level": "concat", 00:11:34.684 "superblock": true, 00:11:34.684 "num_base_bdevs": 2, 00:11:34.684 "num_base_bdevs_discovered": 2, 00:11:34.684 "num_base_bdevs_operational": 2, 00:11:34.684 "base_bdevs_list": [ 00:11:34.684 { 00:11:34.684 "name": "BaseBdev1", 00:11:34.684 "uuid": "2577bb2e-38be-515c-9299-ac4fd3fa6675", 00:11:34.684 "is_configured": true, 00:11:34.684 "data_offset": 2048, 00:11:34.684 "data_size": 63488 00:11:34.684 }, 00:11:34.684 { 00:11:34.684 "name": "BaseBdev2", 00:11:34.684 "uuid": "1f78a45b-4d31-5fbb-8a6c-a60badfe047b", 00:11:34.684 "is_configured": true, 00:11:34.684 "data_offset": 2048, 00:11:34.684 "data_size": 63488 00:11:34.684 } 00:11:34.684 ] 00:11:34.684 }' 00:11:34.684 18:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.684 18:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.251 18:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:35.509 [2024-07-15 18:26:20.982880] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.509 [2024-07-15 18:26:20.982921] bdev_raid.c:1844:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:11:35.509 [2024-07-15 18:26:20.986322] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.509 [2024-07-15 18:26:20.986353] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.509 [2024-07-15 18:26:20.986378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.509 [2024-07-15 18:26:20.986386] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e420e0 name raid_bdev1, state offline 00:11:35.509 0 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2771370 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2771370 ']' 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2771370 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2771370 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2771370' 00:11:35.509 killing process with pid 2771370 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2771370 00:11:35.509 [2024-07-15 18:26:21.047138] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.509 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2771370 
00:11:35.509 [2024-07-15 18:26:21.057193] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.yb8JC0xBO1 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:11:35.768 00:11:35.768 real 0m7.305s 00:11:35.768 user 0m12.168s 00:11:35.768 sys 0m0.943s 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.768 18:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.768 ************************************ 00:11:35.768 END TEST raid_write_error_test 00:11:35.768 ************************************ 00:11:35.768 18:26:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:35.768 18:26:21 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:35.768 18:26:21 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:35.768 18:26:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:35.768 18:26:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.768 18:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.768 ************************************ 00:11:35.768 START TEST 
raid_state_function_test 00:11:35.768 ************************************ 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2772733 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2772733' 00:11:35.768 Process raid pid: 2772733 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:35.768 18:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2772733 /var/tmp/spdk-raid.sock 00:11:36.027 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2772733 ']' 00:11:36.027 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:36.027 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.027 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:36.027 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.027 18:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.027 [2024-07-15 18:26:21.375857] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:11:36.027 [2024-07-15 18:26:21.375915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.027 [2024-07-15 18:26:21.473110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.027 [2024-07-15 18:26:21.567503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.286 [2024-07-15 18:26:21.624661] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.286 [2024-07-15 18:26:21.624691] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.221 18:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.221 18:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:37.221 18:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:37.788 [2024-07-15 18:26:23.032765] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.788 [2024-07-15 18:26:23.032804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.788 [2024-07-15 18:26:23.032813] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.788 [2024-07-15 18:26:23.032822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.788 "name": "Existed_Raid", 00:11:37.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.788 "strip_size_kb": 0, 00:11:37.788 "state": "configuring", 00:11:37.788 "raid_level": "raid1", 00:11:37.788 "superblock": false, 00:11:37.788 "num_base_bdevs": 2, 00:11:37.788 "num_base_bdevs_discovered": 0, 00:11:37.788 "num_base_bdevs_operational": 2, 
00:11:37.788 "base_bdevs_list": [ 00:11:37.788 { 00:11:37.788 "name": "BaseBdev1", 00:11:37.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.788 "is_configured": false, 00:11:37.788 "data_offset": 0, 00:11:37.788 "data_size": 0 00:11:37.788 }, 00:11:37.788 { 00:11:37.788 "name": "BaseBdev2", 00:11:37.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.788 "is_configured": false, 00:11:37.788 "data_offset": 0, 00:11:37.788 "data_size": 0 00:11:37.788 } 00:11:37.788 ] 00:11:37.788 }' 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.788 18:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 18:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:38.722 [2024-07-15 18:26:24.175692] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:38.722 [2024-07-15 18:26:24.175720] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20cbb80 name Existed_Raid, state configuring 00:11:38.723 18:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:38.981 [2024-07-15 18:26:24.440427] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.981 [2024-07-15 18:26:24.440457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.981 [2024-07-15 18:26:24.440465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.981 [2024-07-15 18:26:24.440473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.981 18:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.239 [2024-07-15 18:26:24.714541] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.239 BaseBdev1 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:39.239 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.497 18:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.754 [ 00:11:39.754 { 00:11:39.754 "name": "BaseBdev1", 00:11:39.754 "aliases": [ 00:11:39.754 "b6a2894e-4624-4eb0-9533-ab11145681e4" 00:11:39.754 ], 00:11:39.754 "product_name": "Malloc disk", 00:11:39.754 "block_size": 512, 00:11:39.754 "num_blocks": 65536, 00:11:39.754 "uuid": "b6a2894e-4624-4eb0-9533-ab11145681e4", 00:11:39.754 "assigned_rate_limits": { 00:11:39.754 "rw_ios_per_sec": 0, 00:11:39.754 "rw_mbytes_per_sec": 0, 00:11:39.754 "r_mbytes_per_sec": 0, 00:11:39.754 "w_mbytes_per_sec": 0 00:11:39.754 }, 00:11:39.754 "claimed": true, 
00:11:39.754 "claim_type": "exclusive_write", 00:11:39.754 "zoned": false, 00:11:39.754 "supported_io_types": { 00:11:39.754 "read": true, 00:11:39.754 "write": true, 00:11:39.754 "unmap": true, 00:11:39.754 "flush": true, 00:11:39.754 "reset": true, 00:11:39.754 "nvme_admin": false, 00:11:39.754 "nvme_io": false, 00:11:39.754 "nvme_io_md": false, 00:11:39.754 "write_zeroes": true, 00:11:39.754 "zcopy": true, 00:11:39.754 "get_zone_info": false, 00:11:39.754 "zone_management": false, 00:11:39.754 "zone_append": false, 00:11:39.754 "compare": false, 00:11:39.754 "compare_and_write": false, 00:11:39.754 "abort": true, 00:11:39.754 "seek_hole": false, 00:11:39.754 "seek_data": false, 00:11:39.754 "copy": true, 00:11:39.754 "nvme_iov_md": false 00:11:39.754 }, 00:11:39.754 "memory_domains": [ 00:11:39.754 { 00:11:39.754 "dma_device_id": "system", 00:11:39.754 "dma_device_type": 1 00:11:39.754 }, 00:11:39.754 { 00:11:39.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.755 "dma_device_type": 2 00:11:39.755 } 00:11:39.755 ], 00:11:39.755 "driver_specific": {} 00:11:39.755 } 00:11:39.755 ] 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.755 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.012 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.012 "name": "Existed_Raid", 00:11:40.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.012 "strip_size_kb": 0, 00:11:40.012 "state": "configuring", 00:11:40.012 "raid_level": "raid1", 00:11:40.012 "superblock": false, 00:11:40.012 "num_base_bdevs": 2, 00:11:40.012 "num_base_bdevs_discovered": 1, 00:11:40.012 "num_base_bdevs_operational": 2, 00:11:40.012 "base_bdevs_list": [ 00:11:40.012 { 00:11:40.012 "name": "BaseBdev1", 00:11:40.012 "uuid": "b6a2894e-4624-4eb0-9533-ab11145681e4", 00:11:40.012 "is_configured": true, 00:11:40.012 "data_offset": 0, 00:11:40.012 "data_size": 65536 00:11:40.012 }, 00:11:40.012 { 00:11:40.012 "name": "BaseBdev2", 00:11:40.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.012 "is_configured": false, 00:11:40.012 "data_offset": 0, 00:11:40.012 "data_size": 0 00:11:40.012 } 00:11:40.012 ] 00:11:40.012 }' 00:11:40.012 18:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.012 18:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.576 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:40.833 [2024-07-15 18:26:26.258803] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.833 [2024-07-15 18:26:26.258837] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20cb470 name Existed_Raid, state configuring 00:11:40.833 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:41.089 [2024-07-15 18:26:26.443508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.089 [2024-07-15 18:26:26.445031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.089 [2024-07-15 18:26:26.445059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:41.089 18:26:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.089 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.346 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.346 "name": "Existed_Raid", 00:11:41.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.346 "strip_size_kb": 0, 00:11:41.346 "state": "configuring", 00:11:41.346 "raid_level": "raid1", 00:11:41.346 "superblock": false, 00:11:41.346 "num_base_bdevs": 2, 00:11:41.346 "num_base_bdevs_discovered": 1, 00:11:41.346 "num_base_bdevs_operational": 2, 00:11:41.346 "base_bdevs_list": [ 00:11:41.346 { 00:11:41.346 "name": "BaseBdev1", 00:11:41.346 "uuid": "b6a2894e-4624-4eb0-9533-ab11145681e4", 00:11:41.346 "is_configured": true, 00:11:41.346 "data_offset": 0, 00:11:41.346 "data_size": 65536 00:11:41.346 }, 00:11:41.346 { 00:11:41.346 "name": "BaseBdev2", 00:11:41.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.346 "is_configured": false, 00:11:41.346 "data_offset": 0, 00:11:41.346 "data_size": 0 00:11:41.346 } 00:11:41.346 ] 00:11:41.346 }' 00:11:41.346 18:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.346 18:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.911 18:26:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.168 [2024-07-15 18:26:27.601840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.168 [2024-07-15 18:26:27.601874] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20cc260 00:11:42.168 [2024-07-15 18:26:27.601880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.168 [2024-07-15 18:26:27.602088] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22754f0 00:11:42.168 [2024-07-15 18:26:27.602214] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20cc260 00:11:42.168 [2024-07-15 18:26:27.602223] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20cc260 00:11:42.168 [2024-07-15 18:26:27.602386] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.168 BaseBdev2 00:11:42.168 18:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:42.168 18:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:42.168 18:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:42.168 18:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:42.168 18:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:42.168 18:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:42.169 18:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:42.428 18:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.687 [ 00:11:42.687 { 00:11:42.687 "name": "BaseBdev2", 00:11:42.687 "aliases": [ 00:11:42.687 "de8799e8-15b2-4faa-aa3d-5c0037f3142d" 00:11:42.687 ], 00:11:42.687 "product_name": "Malloc disk", 00:11:42.687 "block_size": 512, 00:11:42.687 "num_blocks": 65536, 00:11:42.687 "uuid": "de8799e8-15b2-4faa-aa3d-5c0037f3142d", 00:11:42.687 "assigned_rate_limits": { 00:11:42.687 "rw_ios_per_sec": 0, 00:11:42.687 "rw_mbytes_per_sec": 0, 00:11:42.687 "r_mbytes_per_sec": 0, 00:11:42.687 "w_mbytes_per_sec": 0 00:11:42.687 }, 00:11:42.687 "claimed": true, 00:11:42.687 "claim_type": "exclusive_write", 00:11:42.687 "zoned": false, 00:11:42.687 "supported_io_types": { 00:11:42.687 "read": true, 00:11:42.687 "write": true, 00:11:42.687 "unmap": true, 00:11:42.687 "flush": true, 00:11:42.687 "reset": true, 00:11:42.687 "nvme_admin": false, 00:11:42.687 "nvme_io": false, 00:11:42.687 "nvme_io_md": false, 00:11:42.687 "write_zeroes": true, 00:11:42.687 "zcopy": true, 00:11:42.687 "get_zone_info": false, 00:11:42.687 "zone_management": false, 00:11:42.687 "zone_append": false, 00:11:42.687 "compare": false, 00:11:42.687 "compare_and_write": false, 00:11:42.687 "abort": true, 00:11:42.687 "seek_hole": false, 00:11:42.687 "seek_data": false, 00:11:42.687 "copy": true, 00:11:42.687 "nvme_iov_md": false 00:11:42.687 }, 00:11:42.687 "memory_domains": [ 00:11:42.687 { 00:11:42.687 "dma_device_id": "system", 00:11:42.687 "dma_device_type": 1 00:11:42.687 }, 00:11:42.687 { 00:11:42.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.687 "dma_device_type": 2 00:11:42.687 } 00:11:42.687 ], 00:11:42.687 "driver_specific": {} 00:11:42.687 } 00:11:42.687 ] 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.687 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.946 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.946 "name": "Existed_Raid", 00:11:42.946 "uuid": "949b5f25-8270-4a75-a9f2-623338a45215", 00:11:42.946 "strip_size_kb": 0, 00:11:42.946 "state": "online", 00:11:42.946 "raid_level": "raid1", 00:11:42.946 "superblock": false, 00:11:42.946 "num_base_bdevs": 
2, 00:11:42.946 "num_base_bdevs_discovered": 2, 00:11:42.946 "num_base_bdevs_operational": 2, 00:11:42.946 "base_bdevs_list": [ 00:11:42.946 { 00:11:42.946 "name": "BaseBdev1", 00:11:42.946 "uuid": "b6a2894e-4624-4eb0-9533-ab11145681e4", 00:11:42.946 "is_configured": true, 00:11:42.946 "data_offset": 0, 00:11:42.946 "data_size": 65536 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "name": "BaseBdev2", 00:11:42.946 "uuid": "de8799e8-15b2-4faa-aa3d-5c0037f3142d", 00:11:42.946 "is_configured": true, 00:11:42.946 "data_offset": 0, 00:11:42.946 "data_size": 65536 00:11:42.946 } 00:11:42.946 ] 00:11:42.946 }' 00:11:42.946 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.946 18:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:43.514 18:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:43.514 [2024-07-15 18:26:28.997906] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.514 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:11:43.514 "name": "Existed_Raid", 00:11:43.514 "aliases": [ 00:11:43.514 "949b5f25-8270-4a75-a9f2-623338a45215" 00:11:43.514 ], 00:11:43.514 "product_name": "Raid Volume", 00:11:43.514 "block_size": 512, 00:11:43.514 "num_blocks": 65536, 00:11:43.515 "uuid": "949b5f25-8270-4a75-a9f2-623338a45215", 00:11:43.515 "assigned_rate_limits": { 00:11:43.515 "rw_ios_per_sec": 0, 00:11:43.515 "rw_mbytes_per_sec": 0, 00:11:43.515 "r_mbytes_per_sec": 0, 00:11:43.515 "w_mbytes_per_sec": 0 00:11:43.515 }, 00:11:43.515 "claimed": false, 00:11:43.515 "zoned": false, 00:11:43.515 "supported_io_types": { 00:11:43.515 "read": true, 00:11:43.515 "write": true, 00:11:43.515 "unmap": false, 00:11:43.515 "flush": false, 00:11:43.515 "reset": true, 00:11:43.515 "nvme_admin": false, 00:11:43.515 "nvme_io": false, 00:11:43.515 "nvme_io_md": false, 00:11:43.515 "write_zeroes": true, 00:11:43.515 "zcopy": false, 00:11:43.515 "get_zone_info": false, 00:11:43.515 "zone_management": false, 00:11:43.515 "zone_append": false, 00:11:43.515 "compare": false, 00:11:43.515 "compare_and_write": false, 00:11:43.515 "abort": false, 00:11:43.515 "seek_hole": false, 00:11:43.515 "seek_data": false, 00:11:43.515 "copy": false, 00:11:43.515 "nvme_iov_md": false 00:11:43.515 }, 00:11:43.515 "memory_domains": [ 00:11:43.515 { 00:11:43.515 "dma_device_id": "system", 00:11:43.515 "dma_device_type": 1 00:11:43.515 }, 00:11:43.515 { 00:11:43.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.515 "dma_device_type": 2 00:11:43.515 }, 00:11:43.515 { 00:11:43.515 "dma_device_id": "system", 00:11:43.515 "dma_device_type": 1 00:11:43.515 }, 00:11:43.515 { 00:11:43.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.515 "dma_device_type": 2 00:11:43.515 } 00:11:43.515 ], 00:11:43.515 "driver_specific": { 00:11:43.515 "raid": { 00:11:43.515 "uuid": "949b5f25-8270-4a75-a9f2-623338a45215", 00:11:43.515 "strip_size_kb": 0, 00:11:43.515 "state": "online", 00:11:43.515 "raid_level": "raid1", 
00:11:43.515 "superblock": false, 00:11:43.515 "num_base_bdevs": 2, 00:11:43.515 "num_base_bdevs_discovered": 2, 00:11:43.515 "num_base_bdevs_operational": 2, 00:11:43.515 "base_bdevs_list": [ 00:11:43.515 { 00:11:43.515 "name": "BaseBdev1", 00:11:43.515 "uuid": "b6a2894e-4624-4eb0-9533-ab11145681e4", 00:11:43.515 "is_configured": true, 00:11:43.515 "data_offset": 0, 00:11:43.515 "data_size": 65536 00:11:43.515 }, 00:11:43.515 { 00:11:43.515 "name": "BaseBdev2", 00:11:43.515 "uuid": "de8799e8-15b2-4faa-aa3d-5c0037f3142d", 00:11:43.515 "is_configured": true, 00:11:43.515 "data_offset": 0, 00:11:43.515 "data_size": 65536 00:11:43.515 } 00:11:43.515 ] 00:11:43.515 } 00:11:43.515 } 00:11:43.515 }' 00:11:43.515 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.774 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:43.774 BaseBdev2' 00:11:43.774 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:43.774 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:43.774 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.033 "name": "BaseBdev1", 00:11:44.033 "aliases": [ 00:11:44.033 "b6a2894e-4624-4eb0-9533-ab11145681e4" 00:11:44.033 ], 00:11:44.033 "product_name": "Malloc disk", 00:11:44.033 "block_size": 512, 00:11:44.033 "num_blocks": 65536, 00:11:44.033 "uuid": "b6a2894e-4624-4eb0-9533-ab11145681e4", 00:11:44.033 "assigned_rate_limits": { 00:11:44.033 "rw_ios_per_sec": 0, 00:11:44.033 "rw_mbytes_per_sec": 0, 00:11:44.033 "r_mbytes_per_sec": 0, 00:11:44.033 
"w_mbytes_per_sec": 0 00:11:44.033 }, 00:11:44.033 "claimed": true, 00:11:44.033 "claim_type": "exclusive_write", 00:11:44.033 "zoned": false, 00:11:44.033 "supported_io_types": { 00:11:44.033 "read": true, 00:11:44.033 "write": true, 00:11:44.033 "unmap": true, 00:11:44.033 "flush": true, 00:11:44.033 "reset": true, 00:11:44.033 "nvme_admin": false, 00:11:44.033 "nvme_io": false, 00:11:44.033 "nvme_io_md": false, 00:11:44.033 "write_zeroes": true, 00:11:44.033 "zcopy": true, 00:11:44.033 "get_zone_info": false, 00:11:44.033 "zone_management": false, 00:11:44.033 "zone_append": false, 00:11:44.033 "compare": false, 00:11:44.033 "compare_and_write": false, 00:11:44.033 "abort": true, 00:11:44.033 "seek_hole": false, 00:11:44.033 "seek_data": false, 00:11:44.033 "copy": true, 00:11:44.033 "nvme_iov_md": false 00:11:44.033 }, 00:11:44.033 "memory_domains": [ 00:11:44.033 { 00:11:44.033 "dma_device_id": "system", 00:11:44.033 "dma_device_type": 1 00:11:44.033 }, 00:11:44.033 { 00:11:44.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.033 "dma_device_type": 2 00:11:44.033 } 00:11:44.033 ], 00:11:44.033 "driver_specific": {} 00:11:44.033 }' 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.033 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.292 
18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:44.292 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:44.292 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:44.292 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:44.292 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:44.292 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:44.292 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.551 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.551 "name": "BaseBdev2", 00:11:44.551 "aliases": [ 00:11:44.551 "de8799e8-15b2-4faa-aa3d-5c0037f3142d" 00:11:44.551 ], 00:11:44.551 "product_name": "Malloc disk", 00:11:44.551 "block_size": 512, 00:11:44.551 "num_blocks": 65536, 00:11:44.551 "uuid": "de8799e8-15b2-4faa-aa3d-5c0037f3142d", 00:11:44.551 "assigned_rate_limits": { 00:11:44.551 "rw_ios_per_sec": 0, 00:11:44.551 "rw_mbytes_per_sec": 0, 00:11:44.551 "r_mbytes_per_sec": 0, 00:11:44.551 "w_mbytes_per_sec": 0 00:11:44.551 }, 00:11:44.551 "claimed": true, 00:11:44.551 "claim_type": "exclusive_write", 00:11:44.551 "zoned": false, 00:11:44.551 "supported_io_types": { 00:11:44.551 "read": true, 00:11:44.551 "write": true, 00:11:44.551 "unmap": true, 00:11:44.551 "flush": true, 00:11:44.551 "reset": true, 00:11:44.551 "nvme_admin": false, 00:11:44.551 "nvme_io": false, 00:11:44.551 "nvme_io_md": false, 00:11:44.551 "write_zeroes": true, 00:11:44.551 "zcopy": true, 00:11:44.551 "get_zone_info": false, 00:11:44.551 "zone_management": false, 00:11:44.551 "zone_append": false, 00:11:44.551 "compare": 
false, 00:11:44.551 "compare_and_write": false, 00:11:44.551 "abort": true, 00:11:44.551 "seek_hole": false, 00:11:44.551 "seek_data": false, 00:11:44.551 "copy": true, 00:11:44.551 "nvme_iov_md": false 00:11:44.551 }, 00:11:44.551 "memory_domains": [ 00:11:44.551 { 00:11:44.551 "dma_device_id": "system", 00:11:44.551 "dma_device_type": 1 00:11:44.551 }, 00:11:44.551 { 00:11:44.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.551 "dma_device_type": 2 00:11:44.551 } 00:11:44.551 ], 00:11:44.551 "driver_specific": {} 00:11:44.551 }' 00:11:44.551 18:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.551 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.551 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:44.551 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.551 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:44.810 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:45.068 
[2024-07-15 18:26:30.509766] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.068 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.068 18:26:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.327 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.327 "name": "Existed_Raid", 00:11:45.327 "uuid": "949b5f25-8270-4a75-a9f2-623338a45215", 00:11:45.327 "strip_size_kb": 0, 00:11:45.327 "state": "online", 00:11:45.327 "raid_level": "raid1", 00:11:45.327 "superblock": false, 00:11:45.327 "num_base_bdevs": 2, 00:11:45.327 "num_base_bdevs_discovered": 1, 00:11:45.327 "num_base_bdevs_operational": 1, 00:11:45.327 "base_bdevs_list": [ 00:11:45.327 { 00:11:45.327 "name": null, 00:11:45.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.327 "is_configured": false, 00:11:45.327 "data_offset": 0, 00:11:45.327 "data_size": 65536 00:11:45.327 }, 00:11:45.327 { 00:11:45.327 "name": "BaseBdev2", 00:11:45.327 "uuid": "de8799e8-15b2-4faa-aa3d-5c0037f3142d", 00:11:45.327 "is_configured": true, 00:11:45.327 "data_offset": 0, 00:11:45.327 "data_size": 65536 00:11:45.327 } 00:11:45.327 ] 00:11:45.327 }' 00:11:45.327 18:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.327 18:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.894 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:45.894 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:45.894 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.894 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:46.152 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:46.153 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.153 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:46.410 [2024-07-15 18:26:31.922705] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.410 [2024-07-15 18:26:31.922786] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.410 [2024-07-15 18:26:31.933877] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.410 [2024-07-15 18:26:31.933913] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.410 [2024-07-15 18:26:31.933921] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20cc260 name Existed_Raid, state offline 00:11:46.410 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:46.410 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:46.410 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.410 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2772733 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2772733 ']' 00:11:46.668 18:26:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2772733 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.668 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2772733 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2772733' 00:11:46.927 killing process with pid 2772733 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2772733 00:11:46.927 [2024-07-15 18:26:32.256992] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2772733 00:11:46.927 [2024-07-15 18:26:32.257852] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:46.927 00:11:46.927 real 0m11.145s 00:11:46.927 user 0m20.357s 00:11:46.927 sys 0m1.570s 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.927 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.927 ************************************ 00:11:46.927 END TEST raid_state_function_test 00:11:46.927 ************************************ 00:11:47.186 18:26:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:47.186 18:26:32 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:47.186 18:26:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:47.186 18:26:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.186 18:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.186 ************************************ 00:11:47.186 START TEST raid_state_function_test_sb 00:11:47.186 ************************************ 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2774662 00:11:47.186 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2774662' 00:11:47.187 Process raid pid: 2774662 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2774662 /var/tmp/spdk-raid.sock 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2774662 ']' 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:47.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.187 18:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.187 [2024-07-15 18:26:32.563563] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:11:47.187 [2024-07-15 18:26:32.563623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.187 [2024-07-15 18:26:32.663526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.446 [2024-07-15 18:26:32.759802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.446 [2024-07-15 18:26:32.818986] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.446 [2024-07-15 18:26:32.819018] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.045 18:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.045 18:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:48.045 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:48.322 [2024-07-15 18:26:33.755108] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev1 00:11:48.322 [2024-07-15 18:26:33.755147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.322 [2024-07-15 18:26:33.755156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.322 [2024-07-15 18:26:33.755165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.322 18:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.581 18:26:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.581 "name": "Existed_Raid", 00:11:48.581 "uuid": "4a5497ee-c2d3-4c2c-9c03-1be21214cb02", 00:11:48.581 "strip_size_kb": 0, 00:11:48.581 "state": "configuring", 00:11:48.581 "raid_level": "raid1", 00:11:48.581 "superblock": true, 00:11:48.581 "num_base_bdevs": 2, 00:11:48.581 "num_base_bdevs_discovered": 0, 00:11:48.581 "num_base_bdevs_operational": 2, 00:11:48.581 "base_bdevs_list": [ 00:11:48.581 { 00:11:48.581 "name": "BaseBdev1", 00:11:48.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.581 "is_configured": false, 00:11:48.581 "data_offset": 0, 00:11:48.581 "data_size": 0 00:11:48.581 }, 00:11:48.581 { 00:11:48.581 "name": "BaseBdev2", 00:11:48.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.581 "is_configured": false, 00:11:48.581 "data_offset": 0, 00:11:48.581 "data_size": 0 00:11:48.581 } 00:11:48.581 ] 00:11:48.581 }' 00:11:48.581 18:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.581 18:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.149 18:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:49.407 [2024-07-15 18:26:34.877998] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.407 [2024-07-15 18:26:34.878027] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2293b80 name Existed_Raid, state configuring 00:11:49.407 18:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:49.666 [2024-07-15 18:26:35.134691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.666 
[2024-07-15 18:26:35.134718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.666 [2024-07-15 18:26:35.134726] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.666 [2024-07-15 18:26:35.134734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.666 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.925 [2024-07-15 18:26:35.396878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.925 BaseBdev1 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:49.925 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:50.184 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.442 [ 00:11:50.442 { 00:11:50.442 "name": "BaseBdev1", 00:11:50.442 "aliases": [ 00:11:50.442 
"183dcdad-e0b3-49e9-8958-da27fc1b833f" 00:11:50.442 ], 00:11:50.442 "product_name": "Malloc disk", 00:11:50.442 "block_size": 512, 00:11:50.442 "num_blocks": 65536, 00:11:50.442 "uuid": "183dcdad-e0b3-49e9-8958-da27fc1b833f", 00:11:50.442 "assigned_rate_limits": { 00:11:50.442 "rw_ios_per_sec": 0, 00:11:50.442 "rw_mbytes_per_sec": 0, 00:11:50.442 "r_mbytes_per_sec": 0, 00:11:50.442 "w_mbytes_per_sec": 0 00:11:50.442 }, 00:11:50.442 "claimed": true, 00:11:50.442 "claim_type": "exclusive_write", 00:11:50.442 "zoned": false, 00:11:50.442 "supported_io_types": { 00:11:50.442 "read": true, 00:11:50.442 "write": true, 00:11:50.442 "unmap": true, 00:11:50.442 "flush": true, 00:11:50.442 "reset": true, 00:11:50.442 "nvme_admin": false, 00:11:50.442 "nvme_io": false, 00:11:50.442 "nvme_io_md": false, 00:11:50.442 "write_zeroes": true, 00:11:50.442 "zcopy": true, 00:11:50.442 "get_zone_info": false, 00:11:50.442 "zone_management": false, 00:11:50.442 "zone_append": false, 00:11:50.442 "compare": false, 00:11:50.442 "compare_and_write": false, 00:11:50.442 "abort": true, 00:11:50.442 "seek_hole": false, 00:11:50.442 "seek_data": false, 00:11:50.442 "copy": true, 00:11:50.442 "nvme_iov_md": false 00:11:50.442 }, 00:11:50.442 "memory_domains": [ 00:11:50.442 { 00:11:50.442 "dma_device_id": "system", 00:11:50.442 "dma_device_type": 1 00:11:50.442 }, 00:11:50.442 { 00:11:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.442 "dma_device_type": 2 00:11:50.442 } 00:11:50.442 ], 00:11:50.442 "driver_specific": {} 00:11:50.442 } 00:11:50.442 ] 00:11:50.442 18:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:50.442 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:50.442 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.442 18:26:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.442 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:50.442 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.443 18:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.701 18:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.701 "name": "Existed_Raid", 00:11:50.701 "uuid": "9f57c15d-f291-4e79-ba55-9866e1676b52", 00:11:50.701 "strip_size_kb": 0, 00:11:50.701 "state": "configuring", 00:11:50.701 "raid_level": "raid1", 00:11:50.701 "superblock": true, 00:11:50.701 "num_base_bdevs": 2, 00:11:50.701 "num_base_bdevs_discovered": 1, 00:11:50.701 "num_base_bdevs_operational": 2, 00:11:50.701 "base_bdevs_list": [ 00:11:50.701 { 00:11:50.701 "name": "BaseBdev1", 00:11:50.701 "uuid": "183dcdad-e0b3-49e9-8958-da27fc1b833f", 00:11:50.701 "is_configured": true, 00:11:50.701 "data_offset": 2048, 00:11:50.701 "data_size": 63488 00:11:50.701 }, 00:11:50.701 { 00:11:50.701 "name": "BaseBdev2", 00:11:50.701 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:50.701 "is_configured": false, 00:11:50.701 "data_offset": 0, 00:11:50.702 "data_size": 0 00:11:50.702 } 00:11:50.702 ] 00:11:50.702 }' 00:11:50.702 18:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.702 18:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.268 18:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:51.526 [2024-07-15 18:26:37.053363] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.526 [2024-07-15 18:26:37.053405] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2293470 name Existed_Raid, state configuring 00:11:51.785 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:51.785 [2024-07-15 18:26:37.318105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.785 [2024-07-15 18:26:37.319623] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.785 [2024-07-15 18:26:37.319653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.785 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:51.785 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.044 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.304 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.304 "name": "Existed_Raid", 00:11:52.304 "uuid": "41f2473b-a4f5-47df-9ef0-acfbd210f072", 00:11:52.304 "strip_size_kb": 0, 00:11:52.304 "state": "configuring", 00:11:52.304 "raid_level": "raid1", 00:11:52.304 "superblock": true, 00:11:52.304 "num_base_bdevs": 2, 00:11:52.304 "num_base_bdevs_discovered": 1, 00:11:52.304 "num_base_bdevs_operational": 2, 00:11:52.304 "base_bdevs_list": [ 00:11:52.304 { 00:11:52.304 "name": "BaseBdev1", 00:11:52.304 "uuid": "183dcdad-e0b3-49e9-8958-da27fc1b833f", 00:11:52.304 "is_configured": true, 00:11:52.304 "data_offset": 2048, 00:11:52.304 "data_size": 63488 00:11:52.304 }, 00:11:52.304 
{ 00:11:52.304 "name": "BaseBdev2", 00:11:52.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.304 "is_configured": false, 00:11:52.304 "data_offset": 0, 00:11:52.304 "data_size": 0 00:11:52.304 } 00:11:52.304 ] 00:11:52.304 }' 00:11:52.304 18:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.304 18:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.871 18:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.129 [2024-07-15 18:26:38.460291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.129 [2024-07-15 18:26:38.460432] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2294260 00:11:53.129 [2024-07-15 18:26:38.460445] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.129 [2024-07-15 18:26:38.460625] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22933c0 00:11:53.129 [2024-07-15 18:26:38.460750] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2294260 00:11:53.129 [2024-07-15 18:26:38.460759] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2294260 00:11:53.129 [2024-07-15 18:26:38.460859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.129 BaseBdev2 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:53.129 18:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:53.387 18:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.647 [ 00:11:53.647 { 00:11:53.647 "name": "BaseBdev2", 00:11:53.647 "aliases": [ 00:11:53.647 "9473a1b8-da29-46c0-b2f1-4634ba73e607" 00:11:53.647 ], 00:11:53.647 "product_name": "Malloc disk", 00:11:53.647 "block_size": 512, 00:11:53.647 "num_blocks": 65536, 00:11:53.647 "uuid": "9473a1b8-da29-46c0-b2f1-4634ba73e607", 00:11:53.647 "assigned_rate_limits": { 00:11:53.647 "rw_ios_per_sec": 0, 00:11:53.647 "rw_mbytes_per_sec": 0, 00:11:53.647 "r_mbytes_per_sec": 0, 00:11:53.647 "w_mbytes_per_sec": 0 00:11:53.647 }, 00:11:53.647 "claimed": true, 00:11:53.647 "claim_type": "exclusive_write", 00:11:53.647 "zoned": false, 00:11:53.647 "supported_io_types": { 00:11:53.647 "read": true, 00:11:53.647 "write": true, 00:11:53.647 "unmap": true, 00:11:53.647 "flush": true, 00:11:53.647 "reset": true, 00:11:53.647 "nvme_admin": false, 00:11:53.647 "nvme_io": false, 00:11:53.647 "nvme_io_md": false, 00:11:53.647 "write_zeroes": true, 00:11:53.647 "zcopy": true, 00:11:53.647 "get_zone_info": false, 00:11:53.647 "zone_management": false, 00:11:53.647 "zone_append": false, 00:11:53.647 "compare": false, 00:11:53.647 "compare_and_write": false, 00:11:53.647 "abort": true, 00:11:53.647 "seek_hole": false, 00:11:53.647 "seek_data": false, 00:11:53.647 "copy": true, 00:11:53.647 
"nvme_iov_md": false 00:11:53.647 }, 00:11:53.647 "memory_domains": [ 00:11:53.647 { 00:11:53.647 "dma_device_id": "system", 00:11:53.647 "dma_device_type": 1 00:11:53.647 }, 00:11:53.647 { 00:11:53.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.647 "dma_device_type": 2 00:11:53.647 } 00:11:53.647 ], 00:11:53.647 "driver_specific": {} 00:11:53.647 } 00:11:53.647 ] 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.647 "name": "Existed_Raid", 00:11:53.647 "uuid": "41f2473b-a4f5-47df-9ef0-acfbd210f072", 00:11:53.647 "strip_size_kb": 0, 00:11:53.647 "state": "online", 00:11:53.647 "raid_level": "raid1", 00:11:53.647 "superblock": true, 00:11:53.647 "num_base_bdevs": 2, 00:11:53.647 "num_base_bdevs_discovered": 2, 00:11:53.647 "num_base_bdevs_operational": 2, 00:11:53.647 "base_bdevs_list": [ 00:11:53.647 { 00:11:53.647 "name": "BaseBdev1", 00:11:53.647 "uuid": "183dcdad-e0b3-49e9-8958-da27fc1b833f", 00:11:53.647 "is_configured": true, 00:11:53.647 "data_offset": 2048, 00:11:53.647 "data_size": 63488 00:11:53.647 }, 00:11:53.647 { 00:11:53.647 "name": "BaseBdev2", 00:11:53.647 "uuid": "9473a1b8-da29-46c0-b2f1-4634ba73e607", 00:11:53.647 "is_configured": true, 00:11:53.647 "data_offset": 2048, 00:11:53.647 "data_size": 63488 00:11:53.647 } 00:11:53.647 ] 00:11:53.647 }' 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.647 18:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:54.583 18:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:54.583 [2024-07-15 18:26:40.044868] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.583 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:54.583 "name": "Existed_Raid", 00:11:54.583 "aliases": [ 00:11:54.583 "41f2473b-a4f5-47df-9ef0-acfbd210f072" 00:11:54.583 ], 00:11:54.583 "product_name": "Raid Volume", 00:11:54.583 "block_size": 512, 00:11:54.583 "num_blocks": 63488, 00:11:54.583 "uuid": "41f2473b-a4f5-47df-9ef0-acfbd210f072", 00:11:54.583 "assigned_rate_limits": { 00:11:54.583 "rw_ios_per_sec": 0, 00:11:54.583 "rw_mbytes_per_sec": 0, 00:11:54.583 "r_mbytes_per_sec": 0, 00:11:54.583 "w_mbytes_per_sec": 0 00:11:54.583 }, 00:11:54.583 "claimed": false, 00:11:54.583 "zoned": false, 00:11:54.583 "supported_io_types": { 00:11:54.583 "read": true, 00:11:54.583 "write": true, 00:11:54.583 "unmap": false, 00:11:54.583 "flush": false, 00:11:54.583 "reset": true, 00:11:54.583 "nvme_admin": false, 00:11:54.583 "nvme_io": false, 00:11:54.583 "nvme_io_md": false, 00:11:54.583 "write_zeroes": true, 00:11:54.583 "zcopy": false, 00:11:54.583 "get_zone_info": false, 00:11:54.583 "zone_management": false, 00:11:54.583 "zone_append": false, 00:11:54.583 "compare": false, 00:11:54.583 "compare_and_write": false, 00:11:54.583 "abort": false, 00:11:54.583 "seek_hole": false, 00:11:54.583 "seek_data": false, 00:11:54.583 "copy": false, 00:11:54.583 "nvme_iov_md": false 00:11:54.583 }, 00:11:54.583 "memory_domains": [ 00:11:54.583 { 
00:11:54.583 "dma_device_id": "system", 00:11:54.583 "dma_device_type": 1 00:11:54.583 }, 00:11:54.583 { 00:11:54.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.583 "dma_device_type": 2 00:11:54.583 }, 00:11:54.583 { 00:11:54.583 "dma_device_id": "system", 00:11:54.583 "dma_device_type": 1 00:11:54.583 }, 00:11:54.583 { 00:11:54.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.583 "dma_device_type": 2 00:11:54.583 } 00:11:54.583 ], 00:11:54.583 "driver_specific": { 00:11:54.583 "raid": { 00:11:54.583 "uuid": "41f2473b-a4f5-47df-9ef0-acfbd210f072", 00:11:54.583 "strip_size_kb": 0, 00:11:54.583 "state": "online", 00:11:54.583 "raid_level": "raid1", 00:11:54.583 "superblock": true, 00:11:54.583 "num_base_bdevs": 2, 00:11:54.583 "num_base_bdevs_discovered": 2, 00:11:54.583 "num_base_bdevs_operational": 2, 00:11:54.583 "base_bdevs_list": [ 00:11:54.583 { 00:11:54.583 "name": "BaseBdev1", 00:11:54.584 "uuid": "183dcdad-e0b3-49e9-8958-da27fc1b833f", 00:11:54.584 "is_configured": true, 00:11:54.584 "data_offset": 2048, 00:11:54.584 "data_size": 63488 00:11:54.584 }, 00:11:54.584 { 00:11:54.584 "name": "BaseBdev2", 00:11:54.584 "uuid": "9473a1b8-da29-46c0-b2f1-4634ba73e607", 00:11:54.584 "is_configured": true, 00:11:54.584 "data_offset": 2048, 00:11:54.584 "data_size": 63488 00:11:54.584 } 00:11:54.584 ] 00:11:54.584 } 00:11:54.584 } 00:11:54.584 }' 00:11:54.584 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.584 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:54.584 BaseBdev2' 00:11:54.584 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.584 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev1 00:11:54.584 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.842 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.842 "name": "BaseBdev1", 00:11:54.842 "aliases": [ 00:11:54.842 "183dcdad-e0b3-49e9-8958-da27fc1b833f" 00:11:54.842 ], 00:11:54.842 "product_name": "Malloc disk", 00:11:54.842 "block_size": 512, 00:11:54.842 "num_blocks": 65536, 00:11:54.842 "uuid": "183dcdad-e0b3-49e9-8958-da27fc1b833f", 00:11:54.842 "assigned_rate_limits": { 00:11:54.842 "rw_ios_per_sec": 0, 00:11:54.842 "rw_mbytes_per_sec": 0, 00:11:54.842 "r_mbytes_per_sec": 0, 00:11:54.843 "w_mbytes_per_sec": 0 00:11:54.843 }, 00:11:54.843 "claimed": true, 00:11:54.843 "claim_type": "exclusive_write", 00:11:54.843 "zoned": false, 00:11:54.843 "supported_io_types": { 00:11:54.843 "read": true, 00:11:54.843 "write": true, 00:11:54.843 "unmap": true, 00:11:54.843 "flush": true, 00:11:54.843 "reset": true, 00:11:54.843 "nvme_admin": false, 00:11:54.843 "nvme_io": false, 00:11:54.843 "nvme_io_md": false, 00:11:54.843 "write_zeroes": true, 00:11:54.843 "zcopy": true, 00:11:54.843 "get_zone_info": false, 00:11:54.843 "zone_management": false, 00:11:54.843 "zone_append": false, 00:11:54.843 "compare": false, 00:11:54.843 "compare_and_write": false, 00:11:54.843 "abort": true, 00:11:54.843 "seek_hole": false, 00:11:54.843 "seek_data": false, 00:11:54.843 "copy": true, 00:11:54.843 "nvme_iov_md": false 00:11:54.843 }, 00:11:54.843 "memory_domains": [ 00:11:54.843 { 00:11:54.843 "dma_device_id": "system", 00:11:54.843 "dma_device_type": 1 00:11:54.843 }, 00:11:54.843 { 00:11:54.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.843 "dma_device_type": 2 00:11:54.843 } 00:11:54.843 ], 00:11:54.843 "driver_specific": {} 00:11:54.843 }' 00:11:54.843 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.101 18:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:55.101 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.360 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.360 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:55.360 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:55.360 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:55.360 18:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:55.618 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:55.618 "name": "BaseBdev2", 00:11:55.618 "aliases": [ 00:11:55.618 "9473a1b8-da29-46c0-b2f1-4634ba73e607" 00:11:55.618 ], 00:11:55.618 "product_name": "Malloc disk", 00:11:55.618 "block_size": 512, 00:11:55.618 "num_blocks": 65536, 00:11:55.619 "uuid": "9473a1b8-da29-46c0-b2f1-4634ba73e607", 00:11:55.619 
"assigned_rate_limits": { 00:11:55.619 "rw_ios_per_sec": 0, 00:11:55.619 "rw_mbytes_per_sec": 0, 00:11:55.619 "r_mbytes_per_sec": 0, 00:11:55.619 "w_mbytes_per_sec": 0 00:11:55.619 }, 00:11:55.619 "claimed": true, 00:11:55.619 "claim_type": "exclusive_write", 00:11:55.619 "zoned": false, 00:11:55.619 "supported_io_types": { 00:11:55.619 "read": true, 00:11:55.619 "write": true, 00:11:55.619 "unmap": true, 00:11:55.619 "flush": true, 00:11:55.619 "reset": true, 00:11:55.619 "nvme_admin": false, 00:11:55.619 "nvme_io": false, 00:11:55.619 "nvme_io_md": false, 00:11:55.619 "write_zeroes": true, 00:11:55.619 "zcopy": true, 00:11:55.619 "get_zone_info": false, 00:11:55.619 "zone_management": false, 00:11:55.619 "zone_append": false, 00:11:55.619 "compare": false, 00:11:55.619 "compare_and_write": false, 00:11:55.619 "abort": true, 00:11:55.619 "seek_hole": false, 00:11:55.619 "seek_data": false, 00:11:55.619 "copy": true, 00:11:55.619 "nvme_iov_md": false 00:11:55.619 }, 00:11:55.619 "memory_domains": [ 00:11:55.619 { 00:11:55.619 "dma_device_id": "system", 00:11:55.619 "dma_device_type": 1 00:11:55.619 }, 00:11:55.619 { 00:11:55.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.619 "dma_device_type": 2 00:11:55.619 } 00:11:55.619 ], 00:11:55.619 "driver_specific": {} 00:11:55.619 }' 00:11:55.619 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.619 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.619 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:55.619 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.619 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:55.878 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:56.137 [2024-07-15 18:26:41.608847] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.137 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.395 18:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.395 "name": "Existed_Raid", 00:11:56.395 "uuid": "41f2473b-a4f5-47df-9ef0-acfbd210f072", 00:11:56.395 "strip_size_kb": 0, 00:11:56.396 "state": "online", 00:11:56.396 "raid_level": "raid1", 00:11:56.396 "superblock": true, 00:11:56.396 "num_base_bdevs": 2, 00:11:56.396 "num_base_bdevs_discovered": 1, 00:11:56.396 "num_base_bdevs_operational": 1, 00:11:56.396 "base_bdevs_list": [ 00:11:56.396 { 00:11:56.396 "name": null, 00:11:56.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.396 "is_configured": false, 00:11:56.396 "data_offset": 2048, 00:11:56.396 "data_size": 63488 00:11:56.396 }, 00:11:56.396 { 00:11:56.396 "name": "BaseBdev2", 00:11:56.396 "uuid": "9473a1b8-da29-46c0-b2f1-4634ba73e607", 00:11:56.396 "is_configured": true, 00:11:56.396 "data_offset": 2048, 00:11:56.396 "data_size": 63488 00:11:56.396 } 00:11:56.396 ] 00:11:56.396 }' 00:11:56.396 18:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.396 18:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.962 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:56.962 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:56.962 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.962 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:57.220 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:57.220 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.220 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:57.479 [2024-07-15 18:26:42.981613] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.479 [2024-07-15 18:26:42.981696] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.479 [2024-07-15 18:26:42.992386] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.479 [2024-07-15 18:26:42.992418] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.479 [2024-07-15 18:26:42.992427] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2294260 name Existed_Raid, state offline 00:11:57.479 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:57.479 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < 
num_base_bdevs )) 00:11:57.479 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.479 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2774662 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2774662 ']' 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2774662 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:57.737 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2774662 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2774662' 00:11:57.994 killing process with pid 2774662 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2774662 00:11:57.994 [2024-07-15 18:26:43.316772] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2774662 00:11:57.994 [2024-07-15 18:26:43.317621] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:57.994 00:11:57.994 real 0m11.014s 00:11:57.994 user 0m20.072s 00:11:57.994 sys 0m1.620s 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.994 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.994 ************************************ 00:11:57.994 END TEST raid_state_function_test_sb 00:11:57.994 ************************************ 00:11:58.252 18:26:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:58.253 18:26:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:58.253 18:26:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:58.253 18:26:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.253 18:26:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.253 ************************************ 00:11:58.253 START TEST raid_superblock_test 00:11:58.253 ************************************ 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2776596 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2776596 /var/tmp/spdk-raid.sock 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2776596 ']' 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:58.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.253 18:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.253 [2024-07-15 18:26:43.651274] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:11:58.253 [2024-07-15 18:26:43.651380] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776596 ] 00:11:58.253 [2024-07-15 18:26:43.786839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.511 [2024-07-15 18:26:43.882130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.511 [2024-07-15 18:26:43.945873] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.511 [2024-07-15 18:26:43.945904] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:59.078 
18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.078 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:59.336 malloc1 00:11:59.336 18:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:59.594 [2024-07-15 18:26:45.071806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:59.594 [2024-07-15 18:26:45.071850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.594 [2024-07-15 18:26:45.071867] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d8e20 00:11:59.594 [2024-07-15 18:26:45.071876] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.594 [2024-07-15 18:26:45.073599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.594 [2024-07-15 18:26:45.073625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:59.594 pt1 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:59.594 18:26:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.594 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:59.852 malloc2 00:11:59.852 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.111 [2024-07-15 18:26:45.589944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.111 [2024-07-15 18:26:45.589998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.111 [2024-07-15 18:26:45.590013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2782ed0 00:12:00.111 [2024-07-15 18:26:45.590023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.111 [2024-07-15 18:26:45.591658] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.111 [2024-07-15 18:26:45.591685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.111 pt2 00:12:00.111 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:00.111 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:00.111 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:12:00.370 [2024-07-15 18:26:45.842626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:00.370 [2024-07-15 18:26:45.843995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.370 [2024-07-15 18:26:45.844142] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2782170 00:12:00.370 [2024-07-15 18:26:45.844154] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.370 [2024-07-15 18:26:45.844353] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27845d0 00:12:00.370 [2024-07-15 18:26:45.844500] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2782170 00:12:00.370 [2024-07-15 18:26:45.844509] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2782170 00:12:00.370 [2024-07-15 18:26:45.844610] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.370 18:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.629 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.629 "name": "raid_bdev1", 00:12:00.629 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:00.629 "strip_size_kb": 0, 00:12:00.629 "state": "online", 00:12:00.629 "raid_level": "raid1", 00:12:00.629 "superblock": true, 00:12:00.629 "num_base_bdevs": 2, 00:12:00.629 "num_base_bdevs_discovered": 2, 00:12:00.629 "num_base_bdevs_operational": 2, 00:12:00.629 "base_bdevs_list": [ 00:12:00.629 { 00:12:00.629 "name": "pt1", 00:12:00.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.629 "is_configured": true, 00:12:00.629 "data_offset": 2048, 00:12:00.629 "data_size": 63488 00:12:00.629 }, 00:12:00.629 { 00:12:00.629 "name": "pt2", 00:12:00.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.629 "is_configured": true, 00:12:00.629 "data_offset": 2048, 00:12:00.629 "data_size": 63488 00:12:00.629 } 00:12:00.629 ] 00:12:00.629 }' 00:12:00.629 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.629 18:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:01.567 18:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:01.567 18:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:01.567 [2024-07-15 18:26:47.010057] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.567 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:01.567 "name": "raid_bdev1", 00:12:01.567 "aliases": [ 00:12:01.567 "2c2b4153-f596-4462-b5be-288348069f4c" 00:12:01.567 ], 00:12:01.567 "product_name": "Raid Volume", 00:12:01.567 "block_size": 512, 00:12:01.567 "num_blocks": 63488, 00:12:01.567 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:01.567 "assigned_rate_limits": { 00:12:01.567 "rw_ios_per_sec": 0, 00:12:01.567 "rw_mbytes_per_sec": 0, 00:12:01.567 "r_mbytes_per_sec": 0, 00:12:01.567 "w_mbytes_per_sec": 0 00:12:01.567 }, 00:12:01.567 "claimed": false, 00:12:01.567 "zoned": false, 00:12:01.567 "supported_io_types": { 00:12:01.567 "read": true, 00:12:01.567 "write": true, 00:12:01.567 "unmap": false, 00:12:01.567 "flush": false, 00:12:01.567 "reset": true, 00:12:01.567 "nvme_admin": false, 00:12:01.567 "nvme_io": false, 00:12:01.567 "nvme_io_md": false, 00:12:01.567 "write_zeroes": true, 00:12:01.567 "zcopy": false, 00:12:01.567 "get_zone_info": false, 00:12:01.567 "zone_management": false, 00:12:01.567 "zone_append": false, 00:12:01.567 "compare": false, 00:12:01.567 "compare_and_write": false, 00:12:01.567 
"abort": false, 00:12:01.567 "seek_hole": false, 00:12:01.567 "seek_data": false, 00:12:01.567 "copy": false, 00:12:01.567 "nvme_iov_md": false 00:12:01.567 }, 00:12:01.567 "memory_domains": [ 00:12:01.567 { 00:12:01.567 "dma_device_id": "system", 00:12:01.567 "dma_device_type": 1 00:12:01.567 }, 00:12:01.567 { 00:12:01.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.567 "dma_device_type": 2 00:12:01.567 }, 00:12:01.567 { 00:12:01.567 "dma_device_id": "system", 00:12:01.567 "dma_device_type": 1 00:12:01.567 }, 00:12:01.567 { 00:12:01.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.567 "dma_device_type": 2 00:12:01.567 } 00:12:01.567 ], 00:12:01.567 "driver_specific": { 00:12:01.567 "raid": { 00:12:01.567 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:01.567 "strip_size_kb": 0, 00:12:01.567 "state": "online", 00:12:01.567 "raid_level": "raid1", 00:12:01.567 "superblock": true, 00:12:01.567 "num_base_bdevs": 2, 00:12:01.567 "num_base_bdevs_discovered": 2, 00:12:01.567 "num_base_bdevs_operational": 2, 00:12:01.567 "base_bdevs_list": [ 00:12:01.567 { 00:12:01.567 "name": "pt1", 00:12:01.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.567 "is_configured": true, 00:12:01.567 "data_offset": 2048, 00:12:01.567 "data_size": 63488 00:12:01.567 }, 00:12:01.567 { 00:12:01.567 "name": "pt2", 00:12:01.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.567 "is_configured": true, 00:12:01.567 "data_offset": 2048, 00:12:01.567 "data_size": 63488 00:12:01.567 } 00:12:01.567 ] 00:12:01.567 } 00:12:01.567 } 00:12:01.567 }' 00:12:01.567 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.567 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:01.567 pt2' 00:12:01.567 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:01.567 18:26:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:01.567 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:01.827 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:01.827 "name": "pt1", 00:12:01.827 "aliases": [ 00:12:01.827 "00000000-0000-0000-0000-000000000001" 00:12:01.827 ], 00:12:01.827 "product_name": "passthru", 00:12:01.827 "block_size": 512, 00:12:01.827 "num_blocks": 65536, 00:12:01.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.827 "assigned_rate_limits": { 00:12:01.827 "rw_ios_per_sec": 0, 00:12:01.827 "rw_mbytes_per_sec": 0, 00:12:01.827 "r_mbytes_per_sec": 0, 00:12:01.827 "w_mbytes_per_sec": 0 00:12:01.827 }, 00:12:01.827 "claimed": true, 00:12:01.827 "claim_type": "exclusive_write", 00:12:01.827 "zoned": false, 00:12:01.827 "supported_io_types": { 00:12:01.827 "read": true, 00:12:01.827 "write": true, 00:12:01.827 "unmap": true, 00:12:01.827 "flush": true, 00:12:01.827 "reset": true, 00:12:01.827 "nvme_admin": false, 00:12:01.827 "nvme_io": false, 00:12:01.827 "nvme_io_md": false, 00:12:01.827 "write_zeroes": true, 00:12:01.827 "zcopy": true, 00:12:01.827 "get_zone_info": false, 00:12:01.827 "zone_management": false, 00:12:01.827 "zone_append": false, 00:12:01.827 "compare": false, 00:12:01.827 "compare_and_write": false, 00:12:01.827 "abort": true, 00:12:01.827 "seek_hole": false, 00:12:01.827 "seek_data": false, 00:12:01.827 "copy": true, 00:12:01.827 "nvme_iov_md": false 00:12:01.827 }, 00:12:01.827 "memory_domains": [ 00:12:01.827 { 00:12:01.827 "dma_device_id": "system", 00:12:01.827 "dma_device_type": 1 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.827 "dma_device_type": 2 00:12:01.827 } 00:12:01.827 ], 00:12:01.827 "driver_specific": { 00:12:01.827 "passthru": { 00:12:01.827 
"name": "pt1", 00:12:01.827 "base_bdev_name": "malloc1" 00:12:01.827 } 00:12:01.827 } 00:12:01.827 }' 00:12:01.827 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.827 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.086 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:02.380 "name": "pt2", 00:12:02.380 "aliases": [ 00:12:02.380 "00000000-0000-0000-0000-000000000002" 00:12:02.380 ], 00:12:02.380 "product_name": "passthru", 00:12:02.380 "block_size": 512, 00:12:02.380 
"num_blocks": 65536, 00:12:02.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.380 "assigned_rate_limits": { 00:12:02.380 "rw_ios_per_sec": 0, 00:12:02.380 "rw_mbytes_per_sec": 0, 00:12:02.380 "r_mbytes_per_sec": 0, 00:12:02.380 "w_mbytes_per_sec": 0 00:12:02.380 }, 00:12:02.380 "claimed": true, 00:12:02.380 "claim_type": "exclusive_write", 00:12:02.380 "zoned": false, 00:12:02.380 "supported_io_types": { 00:12:02.380 "read": true, 00:12:02.380 "write": true, 00:12:02.380 "unmap": true, 00:12:02.380 "flush": true, 00:12:02.380 "reset": true, 00:12:02.380 "nvme_admin": false, 00:12:02.380 "nvme_io": false, 00:12:02.380 "nvme_io_md": false, 00:12:02.380 "write_zeroes": true, 00:12:02.380 "zcopy": true, 00:12:02.380 "get_zone_info": false, 00:12:02.380 "zone_management": false, 00:12:02.380 "zone_append": false, 00:12:02.380 "compare": false, 00:12:02.380 "compare_and_write": false, 00:12:02.380 "abort": true, 00:12:02.380 "seek_hole": false, 00:12:02.380 "seek_data": false, 00:12:02.380 "copy": true, 00:12:02.380 "nvme_iov_md": false 00:12:02.380 }, 00:12:02.380 "memory_domains": [ 00:12:02.380 { 00:12:02.380 "dma_device_id": "system", 00:12:02.380 "dma_device_type": 1 00:12:02.380 }, 00:12:02.380 { 00:12:02.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.380 "dma_device_type": 2 00:12:02.380 } 00:12:02.380 ], 00:12:02.380 "driver_specific": { 00:12:02.380 "passthru": { 00:12:02.380 "name": "pt2", 00:12:02.380 "base_bdev_name": "malloc2" 00:12:02.380 } 00:12:02.380 } 00:12:02.380 }' 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.380 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.666 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:02.666 18:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.666 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.925 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.925 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:02.925 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:03.184 [2024-07-15 18:26:48.482007] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.184 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2c2b4153-f596-4462-b5be-288348069f4c 00:12:03.184 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2c2b4153-f596-4462-b5be-288348069f4c ']' 00:12:03.184 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:03.442 [2024-07-15 18:26:48.738413] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.442 [2024-07-15 18:26:48.738434] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.442 [2024-07-15 18:26:48.738486] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.442 [2024-07-15 
18:26:48.738540] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.442 [2024-07-15 18:26:48.738549] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2782170 name raid_bdev1, state offline 00:12:03.442 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.442 18:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:03.700 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:03.700 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:03.700 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:03.700 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:03.958 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:03.958 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:04.216 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:04.216 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:04.473 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:04.473 18:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:04.473 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:04.473 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:04.474 18:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:04.474 [2024-07-15 18:26:50.017794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:04.474 [2024-07-15 18:26:50.019226] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:04.474 [2024-07-15 18:26:50.019283] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:04.474 [2024-07-15 18:26:50.019320] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:04.474 [2024-07-15 18:26:50.019336] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.474 [2024-07-15 18:26:50.019344] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2783900 name raid_bdev1, state configuring 00:12:04.474 request: 00:12:04.474 { 00:12:04.474 "name": "raid_bdev1", 00:12:04.474 "raid_level": "raid1", 00:12:04.474 "base_bdevs": [ 00:12:04.474 "malloc1", 00:12:04.474 "malloc2" 00:12:04.474 ], 00:12:04.474 "superblock": false, 00:12:04.474 "method": "bdev_raid_create", 00:12:04.474 "req_id": 1 00:12:04.474 } 00:12:04.474 Got JSON-RPC error response 00:12:04.474 response: 00:12:04.474 { 00:12:04.474 "code": -17, 00:12:04.474 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:04.474 } 00:12:04.731 18:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:04.731 18:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:04.732 18:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:04.732 18:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:04.732 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:04.732 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.990 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 
00:12:04.990 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:04.990 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.990 [2024-07-15 18:26:50.527105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.990 [2024-07-15 18:26:50.527149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.990 [2024-07-15 18:26:50.527166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d7760 00:12:04.990 [2024-07-15 18:26:50.527175] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.990 [2024-07-15 18:26:50.528833] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.990 [2024-07-15 18:26:50.528857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.990 [2024-07-15 18:26:50.528918] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:04.990 [2024-07-15 18:26:50.528942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.990 pt1 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.249 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.509 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.509 "name": "raid_bdev1", 00:12:05.509 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:05.509 "strip_size_kb": 0, 00:12:05.509 "state": "configuring", 00:12:05.509 "raid_level": "raid1", 00:12:05.509 "superblock": true, 00:12:05.509 "num_base_bdevs": 2, 00:12:05.509 "num_base_bdevs_discovered": 1, 00:12:05.509 "num_base_bdevs_operational": 2, 00:12:05.509 "base_bdevs_list": [ 00:12:05.509 { 00:12:05.509 "name": "pt1", 00:12:05.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.509 "is_configured": true, 00:12:05.509 "data_offset": 2048, 00:12:05.509 "data_size": 63488 00:12:05.509 }, 00:12:05.509 { 00:12:05.509 "name": null, 00:12:05.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.509 "is_configured": false, 00:12:05.509 "data_offset": 2048, 00:12:05.509 "data_size": 63488 00:12:05.509 } 00:12:05.509 ] 00:12:05.509 }' 00:12:05.509 18:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.509 18:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.076 18:26:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:12:06.076 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:06.076 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:06.076 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.335 [2024-07-15 18:26:51.698284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.335 [2024-07-15 18:26:51.698331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.335 [2024-07-15 18:26:51.698346] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d9a40 00:12:06.335 [2024-07-15 18:26:51.698355] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.335 [2024-07-15 18:26:51.698691] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.335 [2024-07-15 18:26:51.698706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.335 [2024-07-15 18:26:51.698766] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:06.335 [2024-07-15 18:26:51.698783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.335 [2024-07-15 18:26:51.698879] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27854e0 00:12:06.335 [2024-07-15 18:26:51.698888] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:06.335 [2024-07-15 18:26:51.699076] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2788cd0 00:12:06.335 [2024-07-15 18:26:51.699209] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27854e0 00:12:06.335 [2024-07-15 18:26:51.699218] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27854e0 00:12:06.335 [2024-07-15 18:26:51.699320] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.335 pt2 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.335 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.593 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.593 "name": 
"raid_bdev1", 00:12:06.593 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:06.593 "strip_size_kb": 0, 00:12:06.593 "state": "online", 00:12:06.593 "raid_level": "raid1", 00:12:06.593 "superblock": true, 00:12:06.593 "num_base_bdevs": 2, 00:12:06.593 "num_base_bdevs_discovered": 2, 00:12:06.593 "num_base_bdevs_operational": 2, 00:12:06.593 "base_bdevs_list": [ 00:12:06.593 { 00:12:06.593 "name": "pt1", 00:12:06.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.593 "is_configured": true, 00:12:06.593 "data_offset": 2048, 00:12:06.593 "data_size": 63488 00:12:06.593 }, 00:12:06.593 { 00:12:06.593 "name": "pt2", 00:12:06.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.593 "is_configured": true, 00:12:06.593 "data_offset": 2048, 00:12:06.593 "data_size": 63488 00:12:06.593 } 00:12:06.593 ] 00:12:06.593 }' 00:12:06.593 18:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.593 18:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:07.160 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:07.418 [2024-07-15 
18:26:52.769443] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.418 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:07.418 "name": "raid_bdev1", 00:12:07.418 "aliases": [ 00:12:07.418 "2c2b4153-f596-4462-b5be-288348069f4c" 00:12:07.418 ], 00:12:07.418 "product_name": "Raid Volume", 00:12:07.418 "block_size": 512, 00:12:07.418 "num_blocks": 63488, 00:12:07.418 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:07.418 "assigned_rate_limits": { 00:12:07.418 "rw_ios_per_sec": 0, 00:12:07.418 "rw_mbytes_per_sec": 0, 00:12:07.418 "r_mbytes_per_sec": 0, 00:12:07.418 "w_mbytes_per_sec": 0 00:12:07.418 }, 00:12:07.418 "claimed": false, 00:12:07.418 "zoned": false, 00:12:07.418 "supported_io_types": { 00:12:07.418 "read": true, 00:12:07.418 "write": true, 00:12:07.418 "unmap": false, 00:12:07.418 "flush": false, 00:12:07.418 "reset": true, 00:12:07.418 "nvme_admin": false, 00:12:07.418 "nvme_io": false, 00:12:07.418 "nvme_io_md": false, 00:12:07.418 "write_zeroes": true, 00:12:07.418 "zcopy": false, 00:12:07.418 "get_zone_info": false, 00:12:07.418 "zone_management": false, 00:12:07.418 "zone_append": false, 00:12:07.418 "compare": false, 00:12:07.418 "compare_and_write": false, 00:12:07.418 "abort": false, 00:12:07.418 "seek_hole": false, 00:12:07.418 "seek_data": false, 00:12:07.418 "copy": false, 00:12:07.418 "nvme_iov_md": false 00:12:07.418 }, 00:12:07.418 "memory_domains": [ 00:12:07.418 { 00:12:07.418 "dma_device_id": "system", 00:12:07.418 "dma_device_type": 1 00:12:07.418 }, 00:12:07.418 { 00:12:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.418 "dma_device_type": 2 00:12:07.418 }, 00:12:07.418 { 00:12:07.418 "dma_device_id": "system", 00:12:07.418 "dma_device_type": 1 00:12:07.418 }, 00:12:07.418 { 00:12:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.418 "dma_device_type": 2 00:12:07.418 } 00:12:07.418 ], 00:12:07.418 "driver_specific": { 00:12:07.418 
"raid": { 00:12:07.418 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:07.418 "strip_size_kb": 0, 00:12:07.418 "state": "online", 00:12:07.418 "raid_level": "raid1", 00:12:07.418 "superblock": true, 00:12:07.418 "num_base_bdevs": 2, 00:12:07.418 "num_base_bdevs_discovered": 2, 00:12:07.418 "num_base_bdevs_operational": 2, 00:12:07.418 "base_bdevs_list": [ 00:12:07.418 { 00:12:07.418 "name": "pt1", 00:12:07.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.419 "is_configured": true, 00:12:07.419 "data_offset": 2048, 00:12:07.419 "data_size": 63488 00:12:07.419 }, 00:12:07.419 { 00:12:07.419 "name": "pt2", 00:12:07.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.419 "is_configured": true, 00:12:07.419 "data_offset": 2048, 00:12:07.419 "data_size": 63488 00:12:07.419 } 00:12:07.419 ] 00:12:07.419 } 00:12:07.419 } 00:12:07.419 }' 00:12:07.419 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.419 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:07.419 pt2' 00:12:07.419 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:07.419 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:07.419 18:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:07.677 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:07.677 "name": "pt1", 00:12:07.677 "aliases": [ 00:12:07.677 "00000000-0000-0000-0000-000000000001" 00:12:07.677 ], 00:12:07.677 "product_name": "passthru", 00:12:07.677 "block_size": 512, 00:12:07.677 "num_blocks": 65536, 00:12:07.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.677 "assigned_rate_limits": { 
00:12:07.677 "rw_ios_per_sec": 0, 00:12:07.677 "rw_mbytes_per_sec": 0, 00:12:07.677 "r_mbytes_per_sec": 0, 00:12:07.677 "w_mbytes_per_sec": 0 00:12:07.677 }, 00:12:07.677 "claimed": true, 00:12:07.677 "claim_type": "exclusive_write", 00:12:07.677 "zoned": false, 00:12:07.677 "supported_io_types": { 00:12:07.677 "read": true, 00:12:07.677 "write": true, 00:12:07.677 "unmap": true, 00:12:07.677 "flush": true, 00:12:07.677 "reset": true, 00:12:07.677 "nvme_admin": false, 00:12:07.677 "nvme_io": false, 00:12:07.677 "nvme_io_md": false, 00:12:07.677 "write_zeroes": true, 00:12:07.677 "zcopy": true, 00:12:07.677 "get_zone_info": false, 00:12:07.677 "zone_management": false, 00:12:07.677 "zone_append": false, 00:12:07.677 "compare": false, 00:12:07.677 "compare_and_write": false, 00:12:07.677 "abort": true, 00:12:07.677 "seek_hole": false, 00:12:07.677 "seek_data": false, 00:12:07.677 "copy": true, 00:12:07.677 "nvme_iov_md": false 00:12:07.677 }, 00:12:07.677 "memory_domains": [ 00:12:07.677 { 00:12:07.677 "dma_device_id": "system", 00:12:07.677 "dma_device_type": 1 00:12:07.677 }, 00:12:07.677 { 00:12:07.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.677 "dma_device_type": 2 00:12:07.677 } 00:12:07.677 ], 00:12:07.677 "driver_specific": { 00:12:07.677 "passthru": { 00:12:07.677 "name": "pt1", 00:12:07.677 "base_bdev_name": "malloc1" 00:12:07.677 } 00:12:07.677 } 00:12:07.677 }' 00:12:07.677 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.677 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.677 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:07.677 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:07.935 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:08.193 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:08.193 "name": "pt2", 00:12:08.193 "aliases": [ 00:12:08.193 "00000000-0000-0000-0000-000000000002" 00:12:08.193 ], 00:12:08.193 "product_name": "passthru", 00:12:08.193 "block_size": 512, 00:12:08.193 "num_blocks": 65536, 00:12:08.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.193 "assigned_rate_limits": { 00:12:08.193 "rw_ios_per_sec": 0, 00:12:08.193 "rw_mbytes_per_sec": 0, 00:12:08.193 "r_mbytes_per_sec": 0, 00:12:08.193 "w_mbytes_per_sec": 0 00:12:08.193 }, 00:12:08.193 "claimed": true, 00:12:08.193 "claim_type": "exclusive_write", 00:12:08.193 "zoned": false, 00:12:08.193 "supported_io_types": { 00:12:08.193 "read": true, 00:12:08.193 "write": true, 00:12:08.193 "unmap": true, 00:12:08.193 "flush": true, 00:12:08.193 "reset": true, 00:12:08.193 "nvme_admin": false, 00:12:08.193 "nvme_io": false, 00:12:08.193 "nvme_io_md": false, 00:12:08.193 "write_zeroes": true, 
00:12:08.193 "zcopy": true, 00:12:08.193 "get_zone_info": false, 00:12:08.193 "zone_management": false, 00:12:08.193 "zone_append": false, 00:12:08.193 "compare": false, 00:12:08.193 "compare_and_write": false, 00:12:08.193 "abort": true, 00:12:08.193 "seek_hole": false, 00:12:08.193 "seek_data": false, 00:12:08.193 "copy": true, 00:12:08.193 "nvme_iov_md": false 00:12:08.193 }, 00:12:08.193 "memory_domains": [ 00:12:08.193 { 00:12:08.193 "dma_device_id": "system", 00:12:08.193 "dma_device_type": 1 00:12:08.193 }, 00:12:08.193 { 00:12:08.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.193 "dma_device_type": 2 00:12:08.193 } 00:12:08.193 ], 00:12:08.193 "driver_specific": { 00:12:08.193 "passthru": { 00:12:08.193 "name": "pt2", 00:12:08.193 "base_bdev_name": "malloc2" 00:12:08.193 } 00:12:08.193 } 00:12:08.193 }' 00:12:08.193 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:08.452 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:08.710 [2024-07-15 18:26:54.213322] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2c2b4153-f596-4462-b5be-288348069f4c '!=' 2c2b4153-f596-4462-b5be-288348069f4c ']' 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:08.710 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:08.969 [2024-07-15 18:26:54.389544] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.969 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.227 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.227 "name": "raid_bdev1", 00:12:09.227 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:09.227 "strip_size_kb": 0, 00:12:09.227 "state": "online", 00:12:09.227 "raid_level": "raid1", 00:12:09.227 "superblock": true, 00:12:09.227 "num_base_bdevs": 2, 00:12:09.227 "num_base_bdevs_discovered": 1, 00:12:09.227 "num_base_bdevs_operational": 1, 00:12:09.227 "base_bdevs_list": [ 00:12:09.227 { 00:12:09.227 "name": null, 00:12:09.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.227 "is_configured": false, 00:12:09.227 "data_offset": 2048, 00:12:09.227 "data_size": 63488 00:12:09.227 }, 00:12:09.227 { 00:12:09.227 "name": "pt2", 00:12:09.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.227 "is_configured": true, 00:12:09.227 "data_offset": 2048, 00:12:09.227 "data_size": 63488 00:12:09.227 } 00:12:09.227 ] 00:12:09.227 }' 00:12:09.227 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.227 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.794 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:12:10.052 [2024-07-15 18:26:55.572729] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.052 [2024-07-15 18:26:55.572754] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.052 [2024-07-15 18:26:55.572807] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.052 [2024-07-15 18:26:55.572848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.052 [2024-07-15 18:26:55.572857] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27854e0 name raid_bdev1, state offline 00:12:10.052 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.052 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:10.310 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:10.310 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:10.310 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:10.310 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:10.310 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:10.569 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:10.569 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:10.569 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:10.569 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:10.569 18:26:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:12:10.569 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.828 [2024-07-15 18:26:56.326710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.828 [2024-07-15 18:26:56.326758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.828 [2024-07-15 18:26:56.326775] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d9050 00:12:10.828 [2024-07-15 18:26:56.326785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.828 [2024-07-15 18:26:56.328451] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.828 [2024-07-15 18:26:56.328477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.828 [2024-07-15 18:26:56.328536] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:10.828 [2024-07-15 18:26:56.328561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.828 [2024-07-15 18:26:56.328643] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2789f20 00:12:10.828 [2024-07-15 18:26:56.328652] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.828 [2024-07-15 18:26:56.328829] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25da420 00:12:10.828 [2024-07-15 18:26:56.328963] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2789f20 00:12:10.828 [2024-07-15 18:26:56.328972] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2789f20 00:12:10.828 [2024-07-15 18:26:56.329072] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.828 pt2 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.828 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.086 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.086 "name": "raid_bdev1", 00:12:11.086 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:11.086 "strip_size_kb": 0, 00:12:11.086 "state": "online", 00:12:11.086 "raid_level": "raid1", 00:12:11.086 "superblock": true, 00:12:11.086 "num_base_bdevs": 2, 00:12:11.086 "num_base_bdevs_discovered": 1, 00:12:11.086 "num_base_bdevs_operational": 1, 00:12:11.086 "base_bdevs_list": [ 
00:12:11.086 { 00:12:11.086 "name": null, 00:12:11.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.086 "is_configured": false, 00:12:11.086 "data_offset": 2048, 00:12:11.087 "data_size": 63488 00:12:11.087 }, 00:12:11.087 { 00:12:11.087 "name": "pt2", 00:12:11.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.087 "is_configured": true, 00:12:11.087 "data_offset": 2048, 00:12:11.087 "data_size": 63488 00:12:11.087 } 00:12:11.087 ] 00:12:11.087 }' 00:12:11.087 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.087 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.021 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:12.021 [2024-07-15 18:26:57.489844] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.021 [2024-07-15 18:26:57.489869] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.021 [2024-07-15 18:26:57.489920] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.021 [2024-07-15 18:26:57.489976] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.021 [2024-07-15 18:26:57.489986] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2789f20 name raid_bdev1, state offline 00:12:12.021 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:12.021 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.280 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:12.280 18:26:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:12:12.280 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:12:12.280 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.537 [2024-07-15 18:26:58.011210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.537 [2024-07-15 18:26:58.011253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.537 [2024-07-15 18:26:58.011268] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2783100 00:12:12.537 [2024-07-15 18:26:58.011278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.537 [2024-07-15 18:26:58.012939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.537 [2024-07-15 18:26:58.012974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.537 [2024-07-15 18:26:58.013033] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:12.537 [2024-07-15 18:26:58.013056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.537 [2024-07-15 18:26:58.013157] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:12.537 [2024-07-15 18:26:58.013168] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.537 [2024-07-15 18:26:58.013179] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2787df0 name raid_bdev1, state configuring 00:12:12.537 [2024-07-15 18:26:58.013199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.537 [2024-07-15 18:26:58.013257] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2788900 00:12:12.537 [2024-07-15 18:26:58.013266] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:12.537 [2024-07-15 18:26:58.013441] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2783be0 00:12:12.537 [2024-07-15 18:26:58.013564] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2788900 00:12:12.537 [2024-07-15 18:26:58.013573] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2788900 00:12:12.537 [2024-07-15 18:26:58.013673] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.537 pt1 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.537 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.795 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.795 "name": "raid_bdev1", 00:12:12.795 "uuid": "2c2b4153-f596-4462-b5be-288348069f4c", 00:12:12.795 "strip_size_kb": 0, 00:12:12.795 "state": "online", 00:12:12.795 "raid_level": "raid1", 00:12:12.795 "superblock": true, 00:12:12.795 "num_base_bdevs": 2, 00:12:12.795 "num_base_bdevs_discovered": 1, 00:12:12.795 "num_base_bdevs_operational": 1, 00:12:12.795 "base_bdevs_list": [ 00:12:12.795 { 00:12:12.795 "name": null, 00:12:12.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.795 "is_configured": false, 00:12:12.795 "data_offset": 2048, 00:12:12.795 "data_size": 63488 00:12:12.795 }, 00:12:12.795 { 00:12:12.795 "name": "pt2", 00:12:12.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.795 "is_configured": true, 00:12:12.795 "data_offset": 2048, 00:12:12.795 "data_size": 63488 00:12:12.795 } 00:12:12.795 ] 00:12:12.795 }' 00:12:12.795 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.795 18:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.421 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:13.421 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:13.679 18:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:12:13.679 18:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:13.679 18:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:13.937 [2024-07-15 18:26:59.443322] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2c2b4153-f596-4462-b5be-288348069f4c '!=' 2c2b4153-f596-4462-b5be-288348069f4c ']' 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2776596 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2776596 ']' 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2776596 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.937 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2776596 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2776596' 00:12:14.196 killing process with pid 2776596 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2776596 00:12:14.196 [2024-07-15 18:26:59.509642] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.196 [2024-07-15 18:26:59.509692] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.196 [2024-07-15 18:26:59.509732] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:12:14.196 [2024-07-15 18:26:59.509741] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2788900 name raid_bdev1, state offline 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2776596 00:12:14.196 [2024-07-15 18:26:59.526238] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:14.196 00:12:14.196 real 0m16.164s 00:12:14.196 user 0m30.111s 00:12:14.196 sys 0m2.249s 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.196 18:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.196 ************************************ 00:12:14.196 END TEST raid_superblock_test 00:12:14.196 ************************************ 00:12:14.454 18:26:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:14.454 18:26:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:14.455 18:26:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:14.455 18:26:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.455 18:26:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.455 ************************************ 00:12:14.455 START TEST raid_read_error_test 00:12:14.455 ************************************ 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:14.455 
18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.h9xytPQaTk 00:12:14.455 18:26:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2779470 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2779470 /var/tmp/spdk-raid.sock 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2779470 ']' 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:14.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.455 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.455 [2024-07-15 18:26:59.830094] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:12:14.455 [2024-07-15 18:26:59.830153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779470 ] 00:12:14.455 [2024-07-15 18:26:59.926481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.712 [2024-07-15 18:27:00.024516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.712 [2024-07-15 18:27:00.080454] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.712 [2024-07-15 18:27:00.080480] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.280 18:27:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.280 18:27:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:15.280 18:27:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:15.280 18:27:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:15.539 BaseBdev1_malloc 00:12:15.539 18:27:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:15.799 true 00:12:15.799 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:16.057 [2024-07-15 18:27:01.486827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:16.057 [2024-07-15 18:27:01.486866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:16.057 [2024-07-15 18:27:01.486884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248fd20 00:12:16.057 [2024-07-15 18:27:01.486893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.057 [2024-07-15 18:27:01.488674] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.057 [2024-07-15 18:27:01.488700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.057 BaseBdev1 00:12:16.057 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:16.057 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.338 BaseBdev2_malloc 00:12:16.338 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:16.604 true 00:12:16.604 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:16.863 [2024-07-15 18:27:02.245353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:16.863 [2024-07-15 18:27:02.245395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.863 [2024-07-15 18:27:02.245415] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2494d50 00:12:16.863 [2024-07-15 18:27:02.245424] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.863 [2024-07-15 18:27:02.247093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.863 [2024-07-15 18:27:02.247119] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.863 BaseBdev2 00:12:16.863 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:17.121 [2024-07-15 18:27:02.490028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.121 [2024-07-15 18:27:02.491380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.121 [2024-07-15 18:27:02.491568] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24960e0 00:12:17.121 [2024-07-15 18:27:02.491581] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.121 [2024-07-15 18:27:02.491774] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x249e7d0 00:12:17.121 [2024-07-15 18:27:02.491932] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24960e0 00:12:17.121 [2024-07-15 18:27:02.491940] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24960e0 00:12:17.121 [2024-07-15 18:27:02.492058] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.121 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.381 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.381 "name": "raid_bdev1", 00:12:17.381 "uuid": "0de51d69-f20e-45e3-aa3e-9fdde4f65c33", 00:12:17.381 "strip_size_kb": 0, 00:12:17.381 "state": "online", 00:12:17.381 "raid_level": "raid1", 00:12:17.381 "superblock": true, 00:12:17.381 "num_base_bdevs": 2, 00:12:17.381 "num_base_bdevs_discovered": 2, 00:12:17.381 "num_base_bdevs_operational": 2, 00:12:17.381 "base_bdevs_list": [ 00:12:17.381 { 00:12:17.381 "name": "BaseBdev1", 00:12:17.381 "uuid": "4d0ed5ae-5dcc-5274-84d2-0f06a48bcb9b", 00:12:17.381 "is_configured": true, 00:12:17.382 "data_offset": 2048, 00:12:17.382 "data_size": 63488 00:12:17.382 }, 00:12:17.382 { 00:12:17.382 "name": "BaseBdev2", 00:12:17.382 "uuid": "625923e3-994a-5501-863e-3f0452ad1d6a", 00:12:17.382 "is_configured": true, 00:12:17.382 "data_offset": 2048, 00:12:17.382 "data_size": 63488 00:12:17.382 } 00:12:17.382 ] 00:12:17.382 }' 00:12:17.382 18:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.382 18:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.949 18:27:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:17.949 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:18.208 [2024-07-15 18:27:03.537133] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2491bc0 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.146 18:27:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.146 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.404 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:19.404 "name": "raid_bdev1", 00:12:19.404 "uuid": "0de51d69-f20e-45e3-aa3e-9fdde4f65c33", 00:12:19.404 "strip_size_kb": 0, 00:12:19.404 "state": "online", 00:12:19.404 "raid_level": "raid1", 00:12:19.404 "superblock": true, 00:12:19.404 "num_base_bdevs": 2, 00:12:19.404 "num_base_bdevs_discovered": 2, 00:12:19.404 "num_base_bdevs_operational": 2, 00:12:19.404 "base_bdevs_list": [ 00:12:19.404 { 00:12:19.404 "name": "BaseBdev1", 00:12:19.404 "uuid": "4d0ed5ae-5dcc-5274-84d2-0f06a48bcb9b", 00:12:19.404 "is_configured": true, 00:12:19.404 "data_offset": 2048, 00:12:19.404 "data_size": 63488 00:12:19.404 }, 00:12:19.404 { 00:12:19.404 "name": "BaseBdev2", 00:12:19.404 "uuid": "625923e3-994a-5501-863e-3f0452ad1d6a", 00:12:19.404 "is_configured": true, 00:12:19.404 "data_offset": 2048, 00:12:19.404 "data_size": 63488 00:12:19.404 } 00:12:19.404 ] 00:12:19.404 }' 00:12:19.404 18:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:19.404 18:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.340 18:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:20.598 [2024-07-15 18:27:06.051500] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:12:20.598 [2024-07-15 18:27:06.051539] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.599 [2024-07-15 18:27:06.054905] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.599 [2024-07-15 18:27:06.054935] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.599 [2024-07-15 18:27:06.055019] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.599 [2024-07-15 18:27:06.055029] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24960e0 name raid_bdev1, state offline 00:12:20.599 0 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2779470 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2779470 ']' 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2779470 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2779470 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2779470' 00:12:20.599 killing process with pid 2779470 00:12:20.599 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2779470 00:12:20.599 [2024-07-15 18:27:06.119502] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.599 18:27:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2779470 00:12:20.599 [2024-07-15 18:27:06.129838] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.h9xytPQaTk 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:20.858 00:12:20.858 real 0m6.585s 00:12:20.858 user 0m10.704s 00:12:20.858 sys 0m0.883s 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.858 18:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.858 ************************************ 00:12:20.858 END TEST raid_read_error_test 00:12:20.858 ************************************ 00:12:20.858 18:27:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:20.858 18:27:06 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:20.858 18:27:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:20.858 18:27:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.858 18:27:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.858 ************************************ 00:12:20.858 START TEST raid_write_error_test 00:12:20.858 
************************************ 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.FrH8NbiHVN 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2780723 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2780723 /var/tmp/spdk-raid.sock 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2780723 ']' 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:20.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.858 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 [2024-07-15 18:27:06.460461] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:12:21.117 [2024-07-15 18:27:06.460526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780723 ] 00:12:21.117 [2024-07-15 18:27:06.561868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.117 [2024-07-15 18:27:06.652877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.376 [2024-07-15 18:27:06.716181] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.376 [2024-07-15 18:27:06.716216] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.945 18:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.945 18:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:21.945 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:21.945 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.204 BaseBdev1_malloc 00:12:22.204 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:22.464 true 00:12:22.464 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:22.723 [2024-07-15 18:27:08.173932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:22.723 [2024-07-15 18:27:08.173979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:22.723 [2024-07-15 18:27:08.173998] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x129ed20 00:12:22.723 [2024-07-15 18:27:08.174007] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.723 [2024-07-15 18:27:08.175701] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.723 [2024-07-15 18:27:08.175727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:22.723 BaseBdev1 00:12:22.723 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:22.723 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:22.982 BaseBdev2_malloc 00:12:22.982 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:23.241 true 00:12:23.241 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:23.499 [2024-07-15 18:27:08.948471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:23.499 [2024-07-15 18:27:08.948517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.499 [2024-07-15 18:27:08.948536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a3d50 00:12:23.499 [2024-07-15 18:27:08.948545] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.499 [2024-07-15 18:27:08.950081] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.499 [2024-07-15 18:27:08.950107] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.499 BaseBdev2 00:12:23.499 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:23.758 [2024-07-15 18:27:09.209200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.758 [2024-07-15 18:27:09.210586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.758 [2024-07-15 18:27:09.210773] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a50e0 00:12:23.758 [2024-07-15 18:27:09.210786] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.758 [2024-07-15 18:27:09.210993] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12ad7d0 00:12:23.758 [2024-07-15 18:27:09.211153] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a50e0 00:12:23.758 [2024-07-15 18:27:09.211162] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a50e0 00:12:23.758 [2024-07-15 18:27:09.211269] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.758 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.018 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.018 "name": "raid_bdev1", 00:12:24.018 "uuid": "992ee555-6e30-4522-99de-1201bd7b8eef", 00:12:24.018 "strip_size_kb": 0, 00:12:24.018 "state": "online", 00:12:24.018 "raid_level": "raid1", 00:12:24.018 "superblock": true, 00:12:24.018 "num_base_bdevs": 2, 00:12:24.018 "num_base_bdevs_discovered": 2, 00:12:24.018 "num_base_bdevs_operational": 2, 00:12:24.018 "base_bdevs_list": [ 00:12:24.018 { 00:12:24.018 "name": "BaseBdev1", 00:12:24.018 "uuid": "fce851ed-1d75-52ac-b4a6-5f2218d15d0b", 00:12:24.018 "is_configured": true, 00:12:24.018 "data_offset": 2048, 00:12:24.018 "data_size": 63488 00:12:24.018 }, 00:12:24.018 { 00:12:24.018 "name": "BaseBdev2", 00:12:24.018 "uuid": "860cb1a1-ce3b-5527-916b-ff19668912ac", 00:12:24.018 "is_configured": true, 00:12:24.018 "data_offset": 2048, 00:12:24.018 "data_size": 63488 00:12:24.018 } 00:12:24.018 ] 00:12:24.018 }' 00:12:24.018 18:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.018 18:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.964 
18:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:24.964 18:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:24.964 [2024-07-15 18:27:10.320472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a0bc0 00:12:25.902 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:25.902 [2024-07-15 18:27:11.444450] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:25.902 [2024-07-15 18:27:11.444509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.902 [2024-07-15 18:27:11.444687] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x12a0bc0 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:26.161 18:27:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.161 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.730 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.730 "name": "raid_bdev1", 00:12:26.730 "uuid": "992ee555-6e30-4522-99de-1201bd7b8eef", 00:12:26.730 "strip_size_kb": 0, 00:12:26.730 "state": "online", 00:12:26.730 "raid_level": "raid1", 00:12:26.730 "superblock": true, 00:12:26.730 "num_base_bdevs": 2, 00:12:26.730 "num_base_bdevs_discovered": 1, 00:12:26.730 "num_base_bdevs_operational": 1, 00:12:26.730 "base_bdevs_list": [ 00:12:26.730 { 00:12:26.730 "name": null, 00:12:26.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.730 "is_configured": false, 00:12:26.730 "data_offset": 2048, 00:12:26.730 "data_size": 63488 00:12:26.730 }, 00:12:26.730 { 00:12:26.730 "name": "BaseBdev2", 00:12:26.730 "uuid": "860cb1a1-ce3b-5527-916b-ff19668912ac", 00:12:26.730 "is_configured": true, 00:12:26.730 "data_offset": 2048, 00:12:26.730 "data_size": 63488 00:12:26.730 } 00:12:26.730 ] 00:12:26.730 }' 00:12:26.730 18:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- 
# xtrace_disable 00:12:26.730 18:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.298 18:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:27.298 [2024-07-15 18:27:12.843149] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.298 [2024-07-15 18:27:12.843188] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.298 [2024-07-15 18:27:12.846557] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.298 [2024-07-15 18:27:12.846585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.298 [2024-07-15 18:27:12.846642] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.298 [2024-07-15 18:27:12.846651] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a50e0 name raid_bdev1, state offline 00:12:27.298 0 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2780723 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2780723 ']' 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2780723 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2780723 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2780723' 00:12:27.557 killing process with pid 2780723 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2780723 00:12:27.557 [2024-07-15 18:27:12.912300] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.557 18:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2780723 00:12:27.557 [2024-07-15 18:27:12.922114] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.FrH8NbiHVN 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:27.816 00:12:27.816 real 0m6.747s 00:12:27.816 user 0m11.012s 00:12:27.816 sys 0m0.868s 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.816 18:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.816 ************************************ 00:12:27.816 END TEST raid_write_error_test 00:12:27.816 ************************************ 00:12:27.816 18:27:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:27.816 18:27:13 bdev_raid -- 
bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:27.816 18:27:13 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:27.816 18:27:13 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:27.816 18:27:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:27.816 18:27:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.816 18:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.816 ************************************ 00:12:27.816 START TEST raid_state_function_test 00:12:27.816 ************************************ 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:27.816 18:27:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2782226 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2782226' 00:12:27.816 Process raid pid: 2782226 00:12:27.816 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2782226 /var/tmp/spdk-raid.sock 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2782226 ']' 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:27.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.817 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.817 [2024-07-15 18:27:13.245198] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:12:27.817 [2024-07-15 18:27:13.245262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.817 [2024-07-15 18:27:13.345046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.075 [2024-07-15 18:27:13.436585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.075 [2024-07-15 18:27:13.494980] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.075 [2024-07-15 18:27:13.495012] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.661 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.661 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:28.661 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:28.920 [2024-07-15 18:27:14.353545] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.920 [2024-07-15 18:27:14.353582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.920 [2024-07-15 18:27:14.353591] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.920 [2024-07-15 18:27:14.353600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.920 [2024-07-15 18:27:14.353609] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.920 [2024-07-15 18:27:14.353617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.920 18:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.920 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.178 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.178 "name": "Existed_Raid", 00:12:29.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.178 "strip_size_kb": 64, 00:12:29.178 "state": "configuring", 00:12:29.178 "raid_level": "raid0", 00:12:29.178 "superblock": false, 00:12:29.178 "num_base_bdevs": 3, 00:12:29.178 "num_base_bdevs_discovered": 0, 00:12:29.178 "num_base_bdevs_operational": 3, 00:12:29.178 "base_bdevs_list": [ 00:12:29.178 { 
00:12:29.178 "name": "BaseBdev1", 00:12:29.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.178 "is_configured": false, 00:12:29.178 "data_offset": 0, 00:12:29.178 "data_size": 0 00:12:29.178 }, 00:12:29.178 { 00:12:29.178 "name": "BaseBdev2", 00:12:29.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.178 "is_configured": false, 00:12:29.178 "data_offset": 0, 00:12:29.178 "data_size": 0 00:12:29.178 }, 00:12:29.178 { 00:12:29.178 "name": "BaseBdev3", 00:12:29.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.178 "is_configured": false, 00:12:29.178 "data_offset": 0, 00:12:29.178 "data_size": 0 00:12:29.178 } 00:12:29.178 ] 00:12:29.178 }' 00:12:29.178 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.178 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.743 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:30.001 [2024-07-15 18:27:15.400229] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.001 [2024-07-15 18:27:15.400260] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x968ba0 name Existed_Raid, state configuring 00:12:30.001 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:30.259 [2024-07-15 18:27:15.572706] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.259 [2024-07-15 18:27:15.572732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.259 [2024-07-15 18:27:15.572740] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev2 00:12:30.259 [2024-07-15 18:27:15.572748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.259 [2024-07-15 18:27:15.572755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.259 [2024-07-15 18:27:15.572762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:30.259 [2024-07-15 18:27:15.758781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.259 BaseBdev1 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:30.259 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:30.517 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:30.845 [ 00:12:30.845 { 00:12:30.845 "name": "BaseBdev1", 00:12:30.845 "aliases": [ 00:12:30.845 
"83a5047a-4259-4732-99f2-92a06adbc677" 00:12:30.845 ], 00:12:30.845 "product_name": "Malloc disk", 00:12:30.845 "block_size": 512, 00:12:30.845 "num_blocks": 65536, 00:12:30.845 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:30.845 "assigned_rate_limits": { 00:12:30.845 "rw_ios_per_sec": 0, 00:12:30.845 "rw_mbytes_per_sec": 0, 00:12:30.845 "r_mbytes_per_sec": 0, 00:12:30.845 "w_mbytes_per_sec": 0 00:12:30.845 }, 00:12:30.845 "claimed": true, 00:12:30.846 "claim_type": "exclusive_write", 00:12:30.846 "zoned": false, 00:12:30.846 "supported_io_types": { 00:12:30.846 "read": true, 00:12:30.846 "write": true, 00:12:30.846 "unmap": true, 00:12:30.846 "flush": true, 00:12:30.846 "reset": true, 00:12:30.846 "nvme_admin": false, 00:12:30.846 "nvme_io": false, 00:12:30.846 "nvme_io_md": false, 00:12:30.846 "write_zeroes": true, 00:12:30.846 "zcopy": true, 00:12:30.846 "get_zone_info": false, 00:12:30.846 "zone_management": false, 00:12:30.846 "zone_append": false, 00:12:30.846 "compare": false, 00:12:30.846 "compare_and_write": false, 00:12:30.846 "abort": true, 00:12:30.846 "seek_hole": false, 00:12:30.846 "seek_data": false, 00:12:30.846 "copy": true, 00:12:30.846 "nvme_iov_md": false 00:12:30.846 }, 00:12:30.846 "memory_domains": [ 00:12:30.846 { 00:12:30.846 "dma_device_id": "system", 00:12:30.846 "dma_device_type": 1 00:12:30.846 }, 00:12:30.846 { 00:12:30.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.846 "dma_device_type": 2 00:12:30.846 } 00:12:30.846 ], 00:12:30.846 "driver_specific": {} 00:12:30.846 } 00:12:30.846 ] 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.846 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.120 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.120 "name": "Existed_Raid", 00:12:31.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.120 "strip_size_kb": 64, 00:12:31.120 "state": "configuring", 00:12:31.120 "raid_level": "raid0", 00:12:31.120 "superblock": false, 00:12:31.120 "num_base_bdevs": 3, 00:12:31.120 "num_base_bdevs_discovered": 1, 00:12:31.120 "num_base_bdevs_operational": 3, 00:12:31.120 "base_bdevs_list": [ 00:12:31.120 { 00:12:31.120 "name": "BaseBdev1", 00:12:31.120 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:31.120 "is_configured": true, 00:12:31.120 "data_offset": 0, 00:12:31.120 "data_size": 65536 00:12:31.120 }, 00:12:31.120 { 00:12:31.120 "name": "BaseBdev2", 00:12:31.120 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:31.120 "is_configured": false, 00:12:31.120 "data_offset": 0, 00:12:31.120 "data_size": 0 00:12:31.120 }, 00:12:31.120 { 00:12:31.120 "name": "BaseBdev3", 00:12:31.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.120 "is_configured": false, 00:12:31.120 "data_offset": 0, 00:12:31.120 "data_size": 0 00:12:31.120 } 00:12:31.120 ] 00:12:31.120 }' 00:12:31.120 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.120 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.686 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:31.944 [2024-07-15 18:27:17.286896] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.944 [2024-07-15 18:27:17.286936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x968470 name Existed_Raid, state configuring 00:12:31.944 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:32.202 [2024-07-15 18:27:17.547631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.202 [2024-07-15 18:27:17.549146] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.202 [2024-07-15 18:27:17.549178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.202 [2024-07-15 18:27:17.549186] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.202 [2024-07-15 18:27:17.549194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.202 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.460 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.460 "name": "Existed_Raid", 00:12:32.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.460 "strip_size_kb": 64, 00:12:32.460 "state": "configuring", 00:12:32.460 
"raid_level": "raid0", 00:12:32.460 "superblock": false, 00:12:32.460 "num_base_bdevs": 3, 00:12:32.460 "num_base_bdevs_discovered": 1, 00:12:32.460 "num_base_bdevs_operational": 3, 00:12:32.460 "base_bdevs_list": [ 00:12:32.460 { 00:12:32.460 "name": "BaseBdev1", 00:12:32.460 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:32.460 "is_configured": true, 00:12:32.460 "data_offset": 0, 00:12:32.460 "data_size": 65536 00:12:32.460 }, 00:12:32.460 { 00:12:32.460 "name": "BaseBdev2", 00:12:32.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.460 "is_configured": false, 00:12:32.460 "data_offset": 0, 00:12:32.460 "data_size": 0 00:12:32.460 }, 00:12:32.460 { 00:12:32.460 "name": "BaseBdev3", 00:12:32.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.460 "is_configured": false, 00:12:32.460 "data_offset": 0, 00:12:32.460 "data_size": 0 00:12:32.460 } 00:12:32.460 ] 00:12:32.460 }' 00:12:32.460 18:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.460 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.026 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.285 [2024-07-15 18:27:18.694105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.285 BaseBdev2 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:33.285 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:33.543 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.803 [ 00:12:33.803 { 00:12:33.803 "name": "BaseBdev2", 00:12:33.803 "aliases": [ 00:12:33.803 "3f27c030-cfae-4028-93cd-472a6707c182" 00:12:33.803 ], 00:12:33.803 "product_name": "Malloc disk", 00:12:33.803 "block_size": 512, 00:12:33.803 "num_blocks": 65536, 00:12:33.803 "uuid": "3f27c030-cfae-4028-93cd-472a6707c182", 00:12:33.803 "assigned_rate_limits": { 00:12:33.803 "rw_ios_per_sec": 0, 00:12:33.803 "rw_mbytes_per_sec": 0, 00:12:33.803 "r_mbytes_per_sec": 0, 00:12:33.803 "w_mbytes_per_sec": 0 00:12:33.803 }, 00:12:33.803 "claimed": true, 00:12:33.803 "claim_type": "exclusive_write", 00:12:33.803 "zoned": false, 00:12:33.803 "supported_io_types": { 00:12:33.803 "read": true, 00:12:33.803 "write": true, 00:12:33.803 "unmap": true, 00:12:33.803 "flush": true, 00:12:33.803 "reset": true, 00:12:33.803 "nvme_admin": false, 00:12:33.803 "nvme_io": false, 00:12:33.803 "nvme_io_md": false, 00:12:33.803 "write_zeroes": true, 00:12:33.803 "zcopy": true, 00:12:33.803 "get_zone_info": false, 00:12:33.803 "zone_management": false, 00:12:33.803 "zone_append": false, 00:12:33.803 "compare": false, 00:12:33.803 "compare_and_write": false, 00:12:33.803 "abort": true, 00:12:33.803 "seek_hole": false, 00:12:33.803 "seek_data": false, 00:12:33.803 "copy": true, 00:12:33.803 "nvme_iov_md": false 00:12:33.803 }, 00:12:33.803 "memory_domains": [ 00:12:33.803 { 00:12:33.803 "dma_device_id": "system", 
00:12:33.803 "dma_device_type": 1 00:12:33.803 }, 00:12:33.803 { 00:12:33.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.803 "dma_device_type": 2 00:12:33.803 } 00:12:33.803 ], 00:12:33.803 "driver_specific": {} 00:12:33.803 } 00:12:33.803 ] 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.803 18:27:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.061 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.061 "name": "Existed_Raid", 00:12:34.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.061 "strip_size_kb": 64, 00:12:34.061 "state": "configuring", 00:12:34.061 "raid_level": "raid0", 00:12:34.061 "superblock": false, 00:12:34.061 "num_base_bdevs": 3, 00:12:34.061 "num_base_bdevs_discovered": 2, 00:12:34.061 "num_base_bdevs_operational": 3, 00:12:34.061 "base_bdevs_list": [ 00:12:34.061 { 00:12:34.061 "name": "BaseBdev1", 00:12:34.061 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:34.061 "is_configured": true, 00:12:34.061 "data_offset": 0, 00:12:34.061 "data_size": 65536 00:12:34.061 }, 00:12:34.061 { 00:12:34.061 "name": "BaseBdev2", 00:12:34.061 "uuid": "3f27c030-cfae-4028-93cd-472a6707c182", 00:12:34.061 "is_configured": true, 00:12:34.061 "data_offset": 0, 00:12:34.061 "data_size": 65536 00:12:34.061 }, 00:12:34.061 { 00:12:34.061 "name": "BaseBdev3", 00:12:34.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.061 "is_configured": false, 00:12:34.061 "data_offset": 0, 00:12:34.062 "data_size": 0 00:12:34.062 } 00:12:34.062 ] 00:12:34.062 }' 00:12:34.062 18:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.062 18:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.627 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:34.886 [2024-07-15 18:27:20.265660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.886 [2024-07-15 18:27:20.265698] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x969360 00:12:34.886 [2024-07-15 18:27:20.265705] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:34.886 [2024-07-15 18:27:20.265901] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb115f0 00:12:34.886 [2024-07-15 18:27:20.266036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x969360 00:12:34.886 [2024-07-15 18:27:20.266045] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x969360 00:12:34.886 [2024-07-15 18:27:20.266215] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.886 BaseBdev3 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:34.886 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.145 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.404 [ 00:12:35.404 { 00:12:35.404 "name": "BaseBdev3", 00:12:35.404 "aliases": [ 00:12:35.404 "68d66029-150a-45ed-ac7f-7b61feb0ced4" 00:12:35.404 ], 00:12:35.404 "product_name": "Malloc disk", 00:12:35.404 "block_size": 512, 00:12:35.404 "num_blocks": 65536, 00:12:35.404 "uuid": 
"68d66029-150a-45ed-ac7f-7b61feb0ced4", 00:12:35.404 "assigned_rate_limits": { 00:12:35.404 "rw_ios_per_sec": 0, 00:12:35.404 "rw_mbytes_per_sec": 0, 00:12:35.404 "r_mbytes_per_sec": 0, 00:12:35.404 "w_mbytes_per_sec": 0 00:12:35.404 }, 00:12:35.404 "claimed": true, 00:12:35.404 "claim_type": "exclusive_write", 00:12:35.404 "zoned": false, 00:12:35.404 "supported_io_types": { 00:12:35.404 "read": true, 00:12:35.404 "write": true, 00:12:35.404 "unmap": true, 00:12:35.404 "flush": true, 00:12:35.404 "reset": true, 00:12:35.404 "nvme_admin": false, 00:12:35.404 "nvme_io": false, 00:12:35.404 "nvme_io_md": false, 00:12:35.404 "write_zeroes": true, 00:12:35.404 "zcopy": true, 00:12:35.404 "get_zone_info": false, 00:12:35.404 "zone_management": false, 00:12:35.404 "zone_append": false, 00:12:35.404 "compare": false, 00:12:35.404 "compare_and_write": false, 00:12:35.404 "abort": true, 00:12:35.404 "seek_hole": false, 00:12:35.404 "seek_data": false, 00:12:35.404 "copy": true, 00:12:35.404 "nvme_iov_md": false 00:12:35.404 }, 00:12:35.404 "memory_domains": [ 00:12:35.404 { 00:12:35.404 "dma_device_id": "system", 00:12:35.404 "dma_device_type": 1 00:12:35.404 }, 00:12:35.404 { 00:12:35.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.404 "dma_device_type": 2 00:12:35.404 } 00:12:35.404 ], 00:12:35.404 "driver_specific": {} 00:12:35.404 } 00:12:35.404 ] 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.404 18:27:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.404 18:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.662 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.662 "name": "Existed_Raid", 00:12:35.662 "uuid": "cc72f8dc-9ca8-4ae8-a367-d99e6c17070d", 00:12:35.662 "strip_size_kb": 64, 00:12:35.662 "state": "online", 00:12:35.662 "raid_level": "raid0", 00:12:35.662 "superblock": false, 00:12:35.662 "num_base_bdevs": 3, 00:12:35.662 "num_base_bdevs_discovered": 3, 00:12:35.662 "num_base_bdevs_operational": 3, 00:12:35.662 "base_bdevs_list": [ 00:12:35.662 { 00:12:35.662 "name": "BaseBdev1", 00:12:35.662 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:35.662 "is_configured": true, 00:12:35.662 "data_offset": 0, 00:12:35.662 "data_size": 65536 00:12:35.662 }, 00:12:35.662 { 00:12:35.662 "name": "BaseBdev2", 00:12:35.662 "uuid": 
"3f27c030-cfae-4028-93cd-472a6707c182", 00:12:35.662 "is_configured": true, 00:12:35.662 "data_offset": 0, 00:12:35.662 "data_size": 65536 00:12:35.662 }, 00:12:35.662 { 00:12:35.662 "name": "BaseBdev3", 00:12:35.662 "uuid": "68d66029-150a-45ed-ac7f-7b61feb0ced4", 00:12:35.662 "is_configured": true, 00:12:35.662 "data_offset": 0, 00:12:35.662 "data_size": 65536 00:12:35.662 } 00:12:35.662 ] 00:12:35.662 }' 00:12:35.662 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.662 18:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:36.229 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:36.487 [2024-07-15 18:27:21.914428] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.487 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:36.487 "name": "Existed_Raid", 00:12:36.487 "aliases": [ 00:12:36.487 "cc72f8dc-9ca8-4ae8-a367-d99e6c17070d" 00:12:36.487 ], 00:12:36.487 "product_name": "Raid Volume", 
00:12:36.487 "block_size": 512, 00:12:36.487 "num_blocks": 196608, 00:12:36.487 "uuid": "cc72f8dc-9ca8-4ae8-a367-d99e6c17070d", 00:12:36.487 "assigned_rate_limits": { 00:12:36.487 "rw_ios_per_sec": 0, 00:12:36.487 "rw_mbytes_per_sec": 0, 00:12:36.487 "r_mbytes_per_sec": 0, 00:12:36.487 "w_mbytes_per_sec": 0 00:12:36.487 }, 00:12:36.487 "claimed": false, 00:12:36.487 "zoned": false, 00:12:36.487 "supported_io_types": { 00:12:36.487 "read": true, 00:12:36.487 "write": true, 00:12:36.487 "unmap": true, 00:12:36.487 "flush": true, 00:12:36.487 "reset": true, 00:12:36.487 "nvme_admin": false, 00:12:36.487 "nvme_io": false, 00:12:36.487 "nvme_io_md": false, 00:12:36.487 "write_zeroes": true, 00:12:36.487 "zcopy": false, 00:12:36.487 "get_zone_info": false, 00:12:36.487 "zone_management": false, 00:12:36.487 "zone_append": false, 00:12:36.487 "compare": false, 00:12:36.487 "compare_and_write": false, 00:12:36.487 "abort": false, 00:12:36.487 "seek_hole": false, 00:12:36.487 "seek_data": false, 00:12:36.487 "copy": false, 00:12:36.487 "nvme_iov_md": false 00:12:36.487 }, 00:12:36.487 "memory_domains": [ 00:12:36.487 { 00:12:36.487 "dma_device_id": "system", 00:12:36.487 "dma_device_type": 1 00:12:36.487 }, 00:12:36.487 { 00:12:36.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.487 "dma_device_type": 2 00:12:36.487 }, 00:12:36.487 { 00:12:36.487 "dma_device_id": "system", 00:12:36.487 "dma_device_type": 1 00:12:36.487 }, 00:12:36.487 { 00:12:36.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.487 "dma_device_type": 2 00:12:36.487 }, 00:12:36.487 { 00:12:36.487 "dma_device_id": "system", 00:12:36.487 "dma_device_type": 1 00:12:36.487 }, 00:12:36.487 { 00:12:36.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.487 "dma_device_type": 2 00:12:36.487 } 00:12:36.487 ], 00:12:36.487 "driver_specific": { 00:12:36.487 "raid": { 00:12:36.487 "uuid": "cc72f8dc-9ca8-4ae8-a367-d99e6c17070d", 00:12:36.487 "strip_size_kb": 64, 00:12:36.487 "state": "online", 00:12:36.487 
"raid_level": "raid0", 00:12:36.487 "superblock": false, 00:12:36.487 "num_base_bdevs": 3, 00:12:36.487 "num_base_bdevs_discovered": 3, 00:12:36.487 "num_base_bdevs_operational": 3, 00:12:36.488 "base_bdevs_list": [ 00:12:36.488 { 00:12:36.488 "name": "BaseBdev1", 00:12:36.488 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:36.488 "is_configured": true, 00:12:36.488 "data_offset": 0, 00:12:36.488 "data_size": 65536 00:12:36.488 }, 00:12:36.488 { 00:12:36.488 "name": "BaseBdev2", 00:12:36.488 "uuid": "3f27c030-cfae-4028-93cd-472a6707c182", 00:12:36.488 "is_configured": true, 00:12:36.488 "data_offset": 0, 00:12:36.488 "data_size": 65536 00:12:36.488 }, 00:12:36.488 { 00:12:36.488 "name": "BaseBdev3", 00:12:36.488 "uuid": "68d66029-150a-45ed-ac7f-7b61feb0ced4", 00:12:36.488 "is_configured": true, 00:12:36.488 "data_offset": 0, 00:12:36.488 "data_size": 65536 00:12:36.488 } 00:12:36.488 ] 00:12:36.488 } 00:12:36.488 } 00:12:36.488 }' 00:12:36.488 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.488 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:36.488 BaseBdev2 00:12:36.488 BaseBdev3' 00:12:36.488 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.488 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:36.488 18:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.746 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:36.746 "name": "BaseBdev1", 00:12:36.746 "aliases": [ 00:12:36.746 "83a5047a-4259-4732-99f2-92a06adbc677" 00:12:36.746 ], 00:12:36.746 "product_name": "Malloc disk", 00:12:36.746 
"block_size": 512, 00:12:36.747 "num_blocks": 65536, 00:12:36.747 "uuid": "83a5047a-4259-4732-99f2-92a06adbc677", 00:12:36.747 "assigned_rate_limits": { 00:12:36.747 "rw_ios_per_sec": 0, 00:12:36.747 "rw_mbytes_per_sec": 0, 00:12:36.747 "r_mbytes_per_sec": 0, 00:12:36.747 "w_mbytes_per_sec": 0 00:12:36.747 }, 00:12:36.747 "claimed": true, 00:12:36.747 "claim_type": "exclusive_write", 00:12:36.747 "zoned": false, 00:12:36.747 "supported_io_types": { 00:12:36.747 "read": true, 00:12:36.747 "write": true, 00:12:36.747 "unmap": true, 00:12:36.747 "flush": true, 00:12:36.747 "reset": true, 00:12:36.747 "nvme_admin": false, 00:12:36.747 "nvme_io": false, 00:12:36.747 "nvme_io_md": false, 00:12:36.747 "write_zeroes": true, 00:12:36.747 "zcopy": true, 00:12:36.747 "get_zone_info": false, 00:12:36.747 "zone_management": false, 00:12:36.747 "zone_append": false, 00:12:36.747 "compare": false, 00:12:36.747 "compare_and_write": false, 00:12:36.747 "abort": true, 00:12:36.747 "seek_hole": false, 00:12:36.747 "seek_data": false, 00:12:36.747 "copy": true, 00:12:36.747 "nvme_iov_md": false 00:12:36.747 }, 00:12:36.747 "memory_domains": [ 00:12:36.747 { 00:12:36.747 "dma_device_id": "system", 00:12:36.747 "dma_device_type": 1 00:12:36.747 }, 00:12:36.747 { 00:12:36.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.747 "dma_device_type": 2 00:12:36.747 } 00:12:36.747 ], 00:12:36.747 "driver_specific": {} 00:12:36.747 }' 00:12:36.747 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.747 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.005 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.006 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.265 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.265 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.265 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.265 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:37.265 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.524 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.524 "name": "BaseBdev2", 00:12:37.524 "aliases": [ 00:12:37.524 "3f27c030-cfae-4028-93cd-472a6707c182" 00:12:37.524 ], 00:12:37.524 "product_name": "Malloc disk", 00:12:37.524 "block_size": 512, 00:12:37.524 "num_blocks": 65536, 00:12:37.524 "uuid": "3f27c030-cfae-4028-93cd-472a6707c182", 00:12:37.524 "assigned_rate_limits": { 00:12:37.524 "rw_ios_per_sec": 0, 00:12:37.524 "rw_mbytes_per_sec": 0, 00:12:37.524 "r_mbytes_per_sec": 0, 00:12:37.524 "w_mbytes_per_sec": 0 00:12:37.524 }, 00:12:37.524 "claimed": true, 00:12:37.524 "claim_type": "exclusive_write", 00:12:37.524 "zoned": false, 00:12:37.524 "supported_io_types": { 00:12:37.524 "read": true, 00:12:37.524 "write": true, 00:12:37.524 "unmap": true, 00:12:37.524 "flush": true, 00:12:37.524 "reset": true, 00:12:37.524 "nvme_admin": 
false, 00:12:37.524 "nvme_io": false, 00:12:37.524 "nvme_io_md": false, 00:12:37.524 "write_zeroes": true, 00:12:37.524 "zcopy": true, 00:12:37.524 "get_zone_info": false, 00:12:37.524 "zone_management": false, 00:12:37.524 "zone_append": false, 00:12:37.524 "compare": false, 00:12:37.524 "compare_and_write": false, 00:12:37.524 "abort": true, 00:12:37.524 "seek_hole": false, 00:12:37.524 "seek_data": false, 00:12:37.524 "copy": true, 00:12:37.524 "nvme_iov_md": false 00:12:37.524 }, 00:12:37.524 "memory_domains": [ 00:12:37.524 { 00:12:37.524 "dma_device_id": "system", 00:12:37.524 "dma_device_type": 1 00:12:37.524 }, 00:12:37.524 { 00:12:37.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.524 "dma_device_type": 2 00:12:37.524 } 00:12:37.524 ], 00:12:37.524 "driver_specific": {} 00:12:37.524 }' 00:12:37.524 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.524 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.524 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.524 18:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.524 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.524 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.524 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:37.783 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.041 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.041 "name": "BaseBdev3", 00:12:38.041 "aliases": [ 00:12:38.041 "68d66029-150a-45ed-ac7f-7b61feb0ced4" 00:12:38.041 ], 00:12:38.041 "product_name": "Malloc disk", 00:12:38.041 "block_size": 512, 00:12:38.041 "num_blocks": 65536, 00:12:38.041 "uuid": "68d66029-150a-45ed-ac7f-7b61feb0ced4", 00:12:38.041 "assigned_rate_limits": { 00:12:38.041 "rw_ios_per_sec": 0, 00:12:38.041 "rw_mbytes_per_sec": 0, 00:12:38.041 "r_mbytes_per_sec": 0, 00:12:38.041 "w_mbytes_per_sec": 0 00:12:38.041 }, 00:12:38.041 "claimed": true, 00:12:38.041 "claim_type": "exclusive_write", 00:12:38.041 "zoned": false, 00:12:38.041 "supported_io_types": { 00:12:38.041 "read": true, 00:12:38.041 "write": true, 00:12:38.041 "unmap": true, 00:12:38.041 "flush": true, 00:12:38.041 "reset": true, 00:12:38.041 "nvme_admin": false, 00:12:38.041 "nvme_io": false, 00:12:38.041 "nvme_io_md": false, 00:12:38.041 "write_zeroes": true, 00:12:38.041 "zcopy": true, 00:12:38.041 "get_zone_info": false, 00:12:38.041 "zone_management": false, 00:12:38.041 "zone_append": false, 00:12:38.041 "compare": false, 00:12:38.041 "compare_and_write": false, 00:12:38.041 "abort": true, 00:12:38.041 "seek_hole": false, 00:12:38.041 "seek_data": false, 00:12:38.041 "copy": true, 00:12:38.041 "nvme_iov_md": false 00:12:38.041 }, 00:12:38.041 "memory_domains": [ 00:12:38.041 { 00:12:38.041 "dma_device_id": "system", 00:12:38.041 "dma_device_type": 1 00:12:38.041 
}, 00:12:38.041 { 00:12:38.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.041 "dma_device_type": 2 00:12:38.041 } 00:12:38.041 ], 00:12:38.041 "driver_specific": {} 00:12:38.041 }' 00:12:38.041 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.041 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.299 18:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:38.558 [2024-07-15 18:27:24.003794] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.558 [2024-07-15 18:27:24.003822] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.558 [2024-07-15 18:27:24.003862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.558 
18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.558 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:38.818 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.818 "name": "Existed_Raid", 00:12:38.818 "uuid": "cc72f8dc-9ca8-4ae8-a367-d99e6c17070d", 00:12:38.818 "strip_size_kb": 64, 00:12:38.818 "state": "offline", 00:12:38.818 "raid_level": "raid0", 00:12:38.818 "superblock": false, 00:12:38.818 "num_base_bdevs": 3, 00:12:38.818 "num_base_bdevs_discovered": 2, 00:12:38.818 "num_base_bdevs_operational": 2, 00:12:38.818 "base_bdevs_list": [ 00:12:38.818 { 00:12:38.818 "name": null, 00:12:38.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.818 "is_configured": false, 00:12:38.818 "data_offset": 0, 00:12:38.818 "data_size": 65536 00:12:38.818 }, 00:12:38.818 { 00:12:38.818 "name": "BaseBdev2", 00:12:38.818 "uuid": "3f27c030-cfae-4028-93cd-472a6707c182", 00:12:38.818 "is_configured": true, 00:12:38.818 "data_offset": 0, 00:12:38.818 "data_size": 65536 00:12:38.818 }, 00:12:38.818 { 00:12:38.818 "name": "BaseBdev3", 00:12:38.818 "uuid": "68d66029-150a-45ed-ac7f-7b61feb0ced4", 00:12:38.818 "is_configured": true, 00:12:38.818 "data_offset": 0, 00:12:38.818 "data_size": 65536 00:12:38.818 } 00:12:38.818 ] 00:12:38.818 }' 00:12:38.818 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.818 18:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.385 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:39.385 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.385 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.385 18:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:39.644 18:27:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:39.644 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:39.644 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:39.903 [2024-07-15 18:27:25.396666] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.903 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:39.903 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.903 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.903 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:40.162 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:40.162 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:40.162 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:40.421 [2024-07-15 18:27:25.916773] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:40.421 [2024-07-15 18:27:25.916816] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x969360 name Existed_Raid, state offline 00:12:40.421 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:40.421 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:40.421 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.421 18:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:40.680 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:40.680 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:40.680 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:40.680 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:40.680 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:40.680 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:40.939 BaseBdev2 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:40.939 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.197 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.456 [ 00:12:41.456 { 00:12:41.456 "name": "BaseBdev2", 00:12:41.456 "aliases": [ 00:12:41.456 "f04ee6b3-fd79-4023-82d7-5f51ed7736b6" 00:12:41.456 ], 00:12:41.456 "product_name": "Malloc disk", 00:12:41.456 "block_size": 512, 00:12:41.456 "num_blocks": 65536, 00:12:41.456 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:41.456 "assigned_rate_limits": { 00:12:41.456 "rw_ios_per_sec": 0, 00:12:41.456 "rw_mbytes_per_sec": 0, 00:12:41.456 "r_mbytes_per_sec": 0, 00:12:41.456 "w_mbytes_per_sec": 0 00:12:41.456 }, 00:12:41.456 "claimed": false, 00:12:41.456 "zoned": false, 00:12:41.456 "supported_io_types": { 00:12:41.456 "read": true, 00:12:41.456 "write": true, 00:12:41.456 "unmap": true, 00:12:41.456 "flush": true, 00:12:41.456 "reset": true, 00:12:41.456 "nvme_admin": false, 00:12:41.456 "nvme_io": false, 00:12:41.456 "nvme_io_md": false, 00:12:41.456 "write_zeroes": true, 00:12:41.456 "zcopy": true, 00:12:41.456 "get_zone_info": false, 00:12:41.456 "zone_management": false, 00:12:41.456 "zone_append": false, 00:12:41.456 "compare": false, 00:12:41.456 "compare_and_write": false, 00:12:41.456 "abort": true, 00:12:41.456 "seek_hole": false, 00:12:41.456 "seek_data": false, 00:12:41.456 "copy": true, 00:12:41.456 "nvme_iov_md": false 00:12:41.456 }, 00:12:41.456 "memory_domains": [ 00:12:41.456 { 00:12:41.456 "dma_device_id": "system", 00:12:41.456 "dma_device_type": 1 00:12:41.456 }, 00:12:41.456 { 00:12:41.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.457 "dma_device_type": 2 00:12:41.457 } 00:12:41.457 ], 00:12:41.457 "driver_specific": {} 00:12:41.457 } 00:12:41.457 ] 00:12:41.457 18:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:41.457 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:41.457 18:27:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:41.457 18:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:41.716 BaseBdev3 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:41.716 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.976 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:42.235 [ 00:12:42.235 { 00:12:42.235 "name": "BaseBdev3", 00:12:42.236 "aliases": [ 00:12:42.236 "98ddebe7-19bd-4b53-8e37-8788e0592e94" 00:12:42.236 ], 00:12:42.236 "product_name": "Malloc disk", 00:12:42.236 "block_size": 512, 00:12:42.236 "num_blocks": 65536, 00:12:42.236 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:42.236 "assigned_rate_limits": { 00:12:42.236 "rw_ios_per_sec": 0, 00:12:42.236 "rw_mbytes_per_sec": 0, 00:12:42.236 "r_mbytes_per_sec": 0, 00:12:42.236 "w_mbytes_per_sec": 0 00:12:42.236 }, 00:12:42.236 "claimed": false, 00:12:42.236 "zoned": false, 00:12:42.236 
"supported_io_types": { 00:12:42.236 "read": true, 00:12:42.236 "write": true, 00:12:42.236 "unmap": true, 00:12:42.236 "flush": true, 00:12:42.236 "reset": true, 00:12:42.236 "nvme_admin": false, 00:12:42.236 "nvme_io": false, 00:12:42.236 "nvme_io_md": false, 00:12:42.236 "write_zeroes": true, 00:12:42.236 "zcopy": true, 00:12:42.236 "get_zone_info": false, 00:12:42.236 "zone_management": false, 00:12:42.236 "zone_append": false, 00:12:42.236 "compare": false, 00:12:42.236 "compare_and_write": false, 00:12:42.236 "abort": true, 00:12:42.236 "seek_hole": false, 00:12:42.236 "seek_data": false, 00:12:42.236 "copy": true, 00:12:42.236 "nvme_iov_md": false 00:12:42.236 }, 00:12:42.236 "memory_domains": [ 00:12:42.236 { 00:12:42.236 "dma_device_id": "system", 00:12:42.236 "dma_device_type": 1 00:12:42.236 }, 00:12:42.236 { 00:12:42.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.236 "dma_device_type": 2 00:12:42.236 } 00:12:42.236 ], 00:12:42.236 "driver_specific": {} 00:12:42.236 } 00:12:42.236 ] 00:12:42.236 18:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:42.236 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:42.236 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:42.236 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:42.496 [2024-07-15 18:27:27.965692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.496 [2024-07-15 18:27:27.965731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.496 [2024-07-15 18:27:27.965749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.496 
[2024-07-15 18:27:27.967147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.496 18:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.755 18:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.755 "name": "Existed_Raid", 00:12:42.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.755 "strip_size_kb": 64, 00:12:42.755 "state": "configuring", 00:12:42.755 "raid_level": "raid0", 00:12:42.755 "superblock": false, 00:12:42.755 "num_base_bdevs": 3, 00:12:42.755 
"num_base_bdevs_discovered": 2, 00:12:42.755 "num_base_bdevs_operational": 3, 00:12:42.755 "base_bdevs_list": [ 00:12:42.755 { 00:12:42.755 "name": "BaseBdev1", 00:12:42.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.755 "is_configured": false, 00:12:42.755 "data_offset": 0, 00:12:42.755 "data_size": 0 00:12:42.755 }, 00:12:42.755 { 00:12:42.755 "name": "BaseBdev2", 00:12:42.755 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:42.755 "is_configured": true, 00:12:42.755 "data_offset": 0, 00:12:42.755 "data_size": 65536 00:12:42.755 }, 00:12:42.755 { 00:12:42.755 "name": "BaseBdev3", 00:12:42.755 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:42.755 "is_configured": true, 00:12:42.755 "data_offset": 0, 00:12:42.755 "data_size": 65536 00:12:42.755 } 00:12:42.755 ] 00:12:42.755 }' 00:12:42.755 18:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.755 18:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.323 18:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:43.582 [2024-07-15 18:27:29.084689] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.582 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.840 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.840 "name": "Existed_Raid", 00:12:43.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.840 "strip_size_kb": 64, 00:12:43.840 "state": "configuring", 00:12:43.840 "raid_level": "raid0", 00:12:43.840 "superblock": false, 00:12:43.840 "num_base_bdevs": 3, 00:12:43.840 "num_base_bdevs_discovered": 1, 00:12:43.840 "num_base_bdevs_operational": 3, 00:12:43.840 "base_bdevs_list": [ 00:12:43.840 { 00:12:43.840 "name": "BaseBdev1", 00:12:43.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.840 "is_configured": false, 00:12:43.840 "data_offset": 0, 00:12:43.841 "data_size": 0 00:12:43.841 }, 00:12:43.841 { 00:12:43.841 "name": null, 00:12:43.841 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:43.841 "is_configured": false, 00:12:43.841 "data_offset": 0, 00:12:43.841 "data_size": 65536 00:12:43.841 }, 00:12:43.841 { 00:12:43.841 "name": "BaseBdev3", 00:12:43.841 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:43.841 "is_configured": true, 00:12:43.841 "data_offset": 0, 00:12:43.841 "data_size": 65536 00:12:43.841 } 
00:12:43.841 ] 00:12:43.841 }' 00:12:43.841 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.841 18:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.776 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.776 18:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:44.776 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:44.776 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:45.061 [2024-07-15 18:27:30.407531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.061 BaseBdev1 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:45.061 18:27:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:45.319 [ 00:12:45.319 { 00:12:45.319 "name": "BaseBdev1", 00:12:45.319 "aliases": [ 00:12:45.319 "773554da-8a52-46c0-baaa-6064339ea410" 00:12:45.319 ], 00:12:45.319 "product_name": "Malloc disk", 00:12:45.319 "block_size": 512, 00:12:45.319 "num_blocks": 65536, 00:12:45.319 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:45.319 "assigned_rate_limits": { 00:12:45.319 "rw_ios_per_sec": 0, 00:12:45.319 "rw_mbytes_per_sec": 0, 00:12:45.319 "r_mbytes_per_sec": 0, 00:12:45.319 "w_mbytes_per_sec": 0 00:12:45.319 }, 00:12:45.319 "claimed": true, 00:12:45.319 "claim_type": "exclusive_write", 00:12:45.319 "zoned": false, 00:12:45.320 "supported_io_types": { 00:12:45.320 "read": true, 00:12:45.320 "write": true, 00:12:45.320 "unmap": true, 00:12:45.320 "flush": true, 00:12:45.320 "reset": true, 00:12:45.320 "nvme_admin": false, 00:12:45.320 "nvme_io": false, 00:12:45.320 "nvme_io_md": false, 00:12:45.320 "write_zeroes": true, 00:12:45.320 "zcopy": true, 00:12:45.320 "get_zone_info": false, 00:12:45.320 "zone_management": false, 00:12:45.320 "zone_append": false, 00:12:45.320 "compare": false, 00:12:45.320 "compare_and_write": false, 00:12:45.320 "abort": true, 00:12:45.320 "seek_hole": false, 00:12:45.320 "seek_data": false, 00:12:45.320 "copy": true, 00:12:45.320 "nvme_iov_md": false 00:12:45.320 }, 00:12:45.320 "memory_domains": [ 00:12:45.320 { 00:12:45.320 "dma_device_id": "system", 00:12:45.320 "dma_device_type": 1 00:12:45.320 }, 00:12:45.320 { 00:12:45.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.320 "dma_device_type": 2 00:12:45.320 } 00:12:45.320 ], 00:12:45.320 "driver_specific": {} 00:12:45.320 } 00:12:45.320 ] 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.320 18:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.578 18:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.578 "name": "Existed_Raid", 00:12:45.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.578 "strip_size_kb": 64, 00:12:45.578 "state": "configuring", 00:12:45.578 "raid_level": "raid0", 00:12:45.578 "superblock": false, 00:12:45.578 "num_base_bdevs": 3, 00:12:45.578 "num_base_bdevs_discovered": 2, 00:12:45.578 "num_base_bdevs_operational": 3, 00:12:45.578 "base_bdevs_list": [ 00:12:45.578 { 00:12:45.578 "name": "BaseBdev1", 00:12:45.578 
"uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:45.578 "is_configured": true, 00:12:45.578 "data_offset": 0, 00:12:45.578 "data_size": 65536 00:12:45.578 }, 00:12:45.578 { 00:12:45.578 "name": null, 00:12:45.578 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:45.578 "is_configured": false, 00:12:45.578 "data_offset": 0, 00:12:45.578 "data_size": 65536 00:12:45.578 }, 00:12:45.578 { 00:12:45.578 "name": "BaseBdev3", 00:12:45.578 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:45.578 "is_configured": true, 00:12:45.578 "data_offset": 0, 00:12:45.578 "data_size": 65536 00:12:45.578 } 00:12:45.578 ] 00:12:45.578 }' 00:12:45.578 18:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.578 18:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.145 18:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.145 18:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:46.403 18:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:46.403 18:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:46.661 [2024-07-15 18:27:32.164297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.661 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.920 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.920 "name": "Existed_Raid", 00:12:46.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.920 "strip_size_kb": 64, 00:12:46.920 "state": "configuring", 00:12:46.920 "raid_level": "raid0", 00:12:46.920 "superblock": false, 00:12:46.920 "num_base_bdevs": 3, 00:12:46.920 "num_base_bdevs_discovered": 1, 00:12:46.920 "num_base_bdevs_operational": 3, 00:12:46.920 "base_bdevs_list": [ 00:12:46.920 { 00:12:46.920 "name": "BaseBdev1", 00:12:46.920 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:46.920 "is_configured": true, 00:12:46.920 "data_offset": 0, 00:12:46.920 "data_size": 65536 00:12:46.920 }, 00:12:46.920 { 00:12:46.920 "name": null, 00:12:46.920 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:46.920 "is_configured": false, 00:12:46.920 
"data_offset": 0, 00:12:46.920 "data_size": 65536 00:12:46.920 }, 00:12:46.920 { 00:12:46.920 "name": null, 00:12:46.920 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:46.920 "is_configured": false, 00:12:46.920 "data_offset": 0, 00:12:46.920 "data_size": 65536 00:12:46.920 } 00:12:46.920 ] 00:12:46.920 }' 00:12:46.920 18:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.920 18:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.852 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.852 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:47.852 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:47.852 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:48.110 [2024-07-15 18:27:33.548076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=3 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.110 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.367 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.367 "name": "Existed_Raid", 00:12:48.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.367 "strip_size_kb": 64, 00:12:48.367 "state": "configuring", 00:12:48.367 "raid_level": "raid0", 00:12:48.367 "superblock": false, 00:12:48.367 "num_base_bdevs": 3, 00:12:48.367 "num_base_bdevs_discovered": 2, 00:12:48.367 "num_base_bdevs_operational": 3, 00:12:48.367 "base_bdevs_list": [ 00:12:48.367 { 00:12:48.367 "name": "BaseBdev1", 00:12:48.367 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:48.367 "is_configured": true, 00:12:48.367 "data_offset": 0, 00:12:48.367 "data_size": 65536 00:12:48.367 }, 00:12:48.367 { 00:12:48.367 "name": null, 00:12:48.367 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:48.367 "is_configured": false, 00:12:48.367 "data_offset": 0, 00:12:48.367 "data_size": 65536 00:12:48.367 }, 00:12:48.367 { 00:12:48.367 "name": "BaseBdev3", 00:12:48.367 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:48.367 "is_configured": true, 00:12:48.367 "data_offset": 0, 00:12:48.367 "data_size": 65536 00:12:48.367 } 00:12:48.367 ] 
00:12:48.367 }' 00:12:48.367 18:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.367 18:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.931 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.931 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.189 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:49.189 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:49.447 [2024-07-15 18:27:34.839561] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.447 18:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.706 18:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.706 "name": "Existed_Raid", 00:12:49.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.706 "strip_size_kb": 64, 00:12:49.706 "state": "configuring", 00:12:49.706 "raid_level": "raid0", 00:12:49.706 "superblock": false, 00:12:49.706 "num_base_bdevs": 3, 00:12:49.706 "num_base_bdevs_discovered": 1, 00:12:49.706 "num_base_bdevs_operational": 3, 00:12:49.706 "base_bdevs_list": [ 00:12:49.706 { 00:12:49.706 "name": null, 00:12:49.706 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:49.706 "is_configured": false, 00:12:49.706 "data_offset": 0, 00:12:49.706 "data_size": 65536 00:12:49.706 }, 00:12:49.706 { 00:12:49.706 "name": null, 00:12:49.706 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:49.706 "is_configured": false, 00:12:49.706 "data_offset": 0, 00:12:49.706 "data_size": 65536 00:12:49.706 }, 00:12:49.706 { 00:12:49.706 "name": "BaseBdev3", 00:12:49.706 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:49.706 "is_configured": true, 00:12:49.706 "data_offset": 0, 00:12:49.706 "data_size": 65536 00:12:49.706 } 00:12:49.706 ] 00:12:49.706 }' 00:12:49.706 18:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.706 18:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.274 18:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.274 18:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.533 18:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:50.533 18:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.826 [2024-07-15 18:27:36.101433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.826 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.085 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.085 "name": "Existed_Raid", 00:12:51.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.085 "strip_size_kb": 64, 00:12:51.085 "state": "configuring", 00:12:51.085 "raid_level": "raid0", 00:12:51.085 "superblock": false, 00:12:51.085 "num_base_bdevs": 3, 00:12:51.085 "num_base_bdevs_discovered": 2, 00:12:51.085 "num_base_bdevs_operational": 3, 00:12:51.085 "base_bdevs_list": [ 00:12:51.085 { 00:12:51.085 "name": null, 00:12:51.085 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:51.085 "is_configured": false, 00:12:51.085 "data_offset": 0, 00:12:51.085 "data_size": 65536 00:12:51.085 }, 00:12:51.085 { 00:12:51.085 "name": "BaseBdev2", 00:12:51.085 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:51.085 "is_configured": true, 00:12:51.085 "data_offset": 0, 00:12:51.085 "data_size": 65536 00:12:51.085 }, 00:12:51.085 { 00:12:51.085 "name": "BaseBdev3", 00:12:51.085 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:51.085 "is_configured": true, 00:12:51.085 "data_offset": 0, 00:12:51.085 "data_size": 65536 00:12:51.085 } 00:12:51.085 ] 00:12:51.085 }' 00:12:51.085 18:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.085 18:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.652 18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.652 18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.652 
18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:51.652 18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.652 18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.911 18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 773554da-8a52-46c0-baaa-6064339ea410 00:12:52.170 [2024-07-15 18:27:37.681051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:52.170 [2024-07-15 18:27:37.681090] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0da90 00:12:52.170 [2024-07-15 18:27:37.681104] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:52.170 [2024-07-15 18:27:37.681304] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9552d0 00:12:52.170 [2024-07-15 18:27:37.681433] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0da90 00:12:52.170 [2024-07-15 18:27:37.681441] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb0da90 00:12:52.170 [2024-07-15 18:27:37.681608] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.170 NewBaseBdev 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:52.170 18:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.429 18:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:52.688 [ 00:12:52.688 { 00:12:52.688 "name": "NewBaseBdev", 00:12:52.688 "aliases": [ 00:12:52.688 "773554da-8a52-46c0-baaa-6064339ea410" 00:12:52.688 ], 00:12:52.688 "product_name": "Malloc disk", 00:12:52.688 "block_size": 512, 00:12:52.688 "num_blocks": 65536, 00:12:52.688 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:52.688 "assigned_rate_limits": { 00:12:52.688 "rw_ios_per_sec": 0, 00:12:52.688 "rw_mbytes_per_sec": 0, 00:12:52.688 "r_mbytes_per_sec": 0, 00:12:52.688 "w_mbytes_per_sec": 0 00:12:52.688 }, 00:12:52.688 "claimed": true, 00:12:52.688 "claim_type": "exclusive_write", 00:12:52.688 "zoned": false, 00:12:52.688 "supported_io_types": { 00:12:52.688 "read": true, 00:12:52.688 "write": true, 00:12:52.688 "unmap": true, 00:12:52.688 "flush": true, 00:12:52.688 "reset": true, 00:12:52.688 "nvme_admin": false, 00:12:52.688 "nvme_io": false, 00:12:52.688 "nvme_io_md": false, 00:12:52.688 "write_zeroes": true, 00:12:52.688 "zcopy": true, 00:12:52.688 "get_zone_info": false, 00:12:52.688 "zone_management": false, 00:12:52.688 "zone_append": false, 00:12:52.688 "compare": false, 00:12:52.688 "compare_and_write": false, 00:12:52.688 "abort": true, 00:12:52.688 "seek_hole": false, 00:12:52.688 "seek_data": false, 00:12:52.688 "copy": true, 00:12:52.688 "nvme_iov_md": 
false 00:12:52.688 }, 00:12:52.688 "memory_domains": [ 00:12:52.688 { 00:12:52.688 "dma_device_id": "system", 00:12:52.688 "dma_device_type": 1 00:12:52.688 }, 00:12:52.688 { 00:12:52.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.688 "dma_device_type": 2 00:12:52.688 } 00:12:52.688 ], 00:12:52.688 "driver_specific": {} 00:12:52.688 } 00:12:52.688 ] 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.688 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.947 18:27:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.947 "name": "Existed_Raid", 00:12:52.947 "uuid": "e68fc386-b3bc-43fa-b408-177bf30454af", 00:12:52.947 "strip_size_kb": 64, 00:12:52.947 "state": "online", 00:12:52.947 "raid_level": "raid0", 00:12:52.947 "superblock": false, 00:12:52.947 "num_base_bdevs": 3, 00:12:52.947 "num_base_bdevs_discovered": 3, 00:12:52.947 "num_base_bdevs_operational": 3, 00:12:52.947 "base_bdevs_list": [ 00:12:52.947 { 00:12:52.947 "name": "NewBaseBdev", 00:12:52.947 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:52.947 "is_configured": true, 00:12:52.947 "data_offset": 0, 00:12:52.947 "data_size": 65536 00:12:52.947 }, 00:12:52.947 { 00:12:52.947 "name": "BaseBdev2", 00:12:52.947 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:52.947 "is_configured": true, 00:12:52.947 "data_offset": 0, 00:12:52.947 "data_size": 65536 00:12:52.947 }, 00:12:52.947 { 00:12:52.947 "name": "BaseBdev3", 00:12:52.947 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:52.947 "is_configured": true, 00:12:52.947 "data_offset": 0, 00:12:52.947 "data_size": 65536 00:12:52.947 } 00:12:52.947 ] 00:12:52.947 }' 00:12:52.947 18:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.947 18:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:53.885 18:27:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:53.885 [2024-07-15 18:27:39.257622] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:53.885 "name": "Existed_Raid", 00:12:53.885 "aliases": [ 00:12:53.885 "e68fc386-b3bc-43fa-b408-177bf30454af" 00:12:53.885 ], 00:12:53.885 "product_name": "Raid Volume", 00:12:53.885 "block_size": 512, 00:12:53.885 "num_blocks": 196608, 00:12:53.885 "uuid": "e68fc386-b3bc-43fa-b408-177bf30454af", 00:12:53.885 "assigned_rate_limits": { 00:12:53.885 "rw_ios_per_sec": 0, 00:12:53.885 "rw_mbytes_per_sec": 0, 00:12:53.885 "r_mbytes_per_sec": 0, 00:12:53.885 "w_mbytes_per_sec": 0 00:12:53.885 }, 00:12:53.885 "claimed": false, 00:12:53.885 "zoned": false, 00:12:53.885 "supported_io_types": { 00:12:53.885 "read": true, 00:12:53.885 "write": true, 00:12:53.885 "unmap": true, 00:12:53.885 "flush": true, 00:12:53.885 "reset": true, 00:12:53.885 "nvme_admin": false, 00:12:53.885 "nvme_io": false, 00:12:53.885 "nvme_io_md": false, 00:12:53.885 "write_zeroes": true, 00:12:53.885 "zcopy": false, 00:12:53.885 "get_zone_info": false, 00:12:53.885 "zone_management": false, 00:12:53.885 "zone_append": false, 00:12:53.885 "compare": false, 00:12:53.885 "compare_and_write": false, 00:12:53.885 "abort": false, 00:12:53.885 "seek_hole": false, 00:12:53.885 "seek_data": false, 00:12:53.885 "copy": false, 00:12:53.885 "nvme_iov_md": false 00:12:53.885 }, 00:12:53.885 "memory_domains": [ 00:12:53.885 { 00:12:53.885 "dma_device_id": "system", 00:12:53.885 "dma_device_type": 1 00:12:53.885 }, 
00:12:53.885 { 00:12:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.885 "dma_device_type": 2 00:12:53.885 }, 00:12:53.885 { 00:12:53.885 "dma_device_id": "system", 00:12:53.885 "dma_device_type": 1 00:12:53.885 }, 00:12:53.885 { 00:12:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.885 "dma_device_type": 2 00:12:53.885 }, 00:12:53.885 { 00:12:53.885 "dma_device_id": "system", 00:12:53.885 "dma_device_type": 1 00:12:53.885 }, 00:12:53.885 { 00:12:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.885 "dma_device_type": 2 00:12:53.885 } 00:12:53.885 ], 00:12:53.885 "driver_specific": { 00:12:53.885 "raid": { 00:12:53.885 "uuid": "e68fc386-b3bc-43fa-b408-177bf30454af", 00:12:53.885 "strip_size_kb": 64, 00:12:53.885 "state": "online", 00:12:53.885 "raid_level": "raid0", 00:12:53.885 "superblock": false, 00:12:53.885 "num_base_bdevs": 3, 00:12:53.885 "num_base_bdevs_discovered": 3, 00:12:53.885 "num_base_bdevs_operational": 3, 00:12:53.885 "base_bdevs_list": [ 00:12:53.885 { 00:12:53.885 "name": "NewBaseBdev", 00:12:53.885 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:53.885 "is_configured": true, 00:12:53.885 "data_offset": 0, 00:12:53.885 "data_size": 65536 00:12:53.885 }, 00:12:53.885 { 00:12:53.885 "name": "BaseBdev2", 00:12:53.885 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:53.885 "is_configured": true, 00:12:53.885 "data_offset": 0, 00:12:53.885 "data_size": 65536 00:12:53.885 }, 00:12:53.885 { 00:12:53.885 "name": "BaseBdev3", 00:12:53.885 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:53.885 "is_configured": true, 00:12:53.885 "data_offset": 0, 00:12:53.885 "data_size": 65536 00:12:53.885 } 00:12:53.885 ] 00:12:53.885 } 00:12:53.885 } 00:12:53.885 }' 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:12:53.885 BaseBdev2 00:12:53.885 BaseBdev3' 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:53.885 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.145 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.145 "name": "NewBaseBdev", 00:12:54.145 "aliases": [ 00:12:54.145 "773554da-8a52-46c0-baaa-6064339ea410" 00:12:54.145 ], 00:12:54.145 "product_name": "Malloc disk", 00:12:54.145 "block_size": 512, 00:12:54.145 "num_blocks": 65536, 00:12:54.145 "uuid": "773554da-8a52-46c0-baaa-6064339ea410", 00:12:54.145 "assigned_rate_limits": { 00:12:54.145 "rw_ios_per_sec": 0, 00:12:54.145 "rw_mbytes_per_sec": 0, 00:12:54.145 "r_mbytes_per_sec": 0, 00:12:54.145 "w_mbytes_per_sec": 0 00:12:54.145 }, 00:12:54.145 "claimed": true, 00:12:54.145 "claim_type": "exclusive_write", 00:12:54.145 "zoned": false, 00:12:54.145 "supported_io_types": { 00:12:54.145 "read": true, 00:12:54.145 "write": true, 00:12:54.145 "unmap": true, 00:12:54.145 "flush": true, 00:12:54.145 "reset": true, 00:12:54.145 "nvme_admin": false, 00:12:54.145 "nvme_io": false, 00:12:54.145 "nvme_io_md": false, 00:12:54.145 "write_zeroes": true, 00:12:54.145 "zcopy": true, 00:12:54.145 "get_zone_info": false, 00:12:54.145 "zone_management": false, 00:12:54.145 "zone_append": false, 00:12:54.145 "compare": false, 00:12:54.145 "compare_and_write": false, 00:12:54.145 "abort": true, 00:12:54.145 "seek_hole": false, 00:12:54.145 "seek_data": false, 00:12:54.145 "copy": true, 00:12:54.145 "nvme_iov_md": false 00:12:54.145 }, 00:12:54.145 "memory_domains": [ 00:12:54.145 { 00:12:54.145 "dma_device_id": "system", 00:12:54.145 
"dma_device_type": 1 00:12:54.145 }, 00:12:54.145 { 00:12:54.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.145 "dma_device_type": 2 00:12:54.145 } 00:12:54.145 ], 00:12:54.145 "driver_specific": {} 00:12:54.145 }' 00:12:54.145 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.145 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.145 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.145 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.404 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.663 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.663 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:54.663 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:54.663 18:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.922 "name": 
"BaseBdev2", 00:12:54.922 "aliases": [ 00:12:54.922 "f04ee6b3-fd79-4023-82d7-5f51ed7736b6" 00:12:54.922 ], 00:12:54.922 "product_name": "Malloc disk", 00:12:54.922 "block_size": 512, 00:12:54.922 "num_blocks": 65536, 00:12:54.922 "uuid": "f04ee6b3-fd79-4023-82d7-5f51ed7736b6", 00:12:54.922 "assigned_rate_limits": { 00:12:54.922 "rw_ios_per_sec": 0, 00:12:54.922 "rw_mbytes_per_sec": 0, 00:12:54.922 "r_mbytes_per_sec": 0, 00:12:54.922 "w_mbytes_per_sec": 0 00:12:54.922 }, 00:12:54.922 "claimed": true, 00:12:54.922 "claim_type": "exclusive_write", 00:12:54.922 "zoned": false, 00:12:54.922 "supported_io_types": { 00:12:54.922 "read": true, 00:12:54.922 "write": true, 00:12:54.922 "unmap": true, 00:12:54.922 "flush": true, 00:12:54.922 "reset": true, 00:12:54.922 "nvme_admin": false, 00:12:54.922 "nvme_io": false, 00:12:54.922 "nvme_io_md": false, 00:12:54.922 "write_zeroes": true, 00:12:54.922 "zcopy": true, 00:12:54.922 "get_zone_info": false, 00:12:54.922 "zone_management": false, 00:12:54.922 "zone_append": false, 00:12:54.922 "compare": false, 00:12:54.922 "compare_and_write": false, 00:12:54.922 "abort": true, 00:12:54.922 "seek_hole": false, 00:12:54.922 "seek_data": false, 00:12:54.922 "copy": true, 00:12:54.922 "nvme_iov_md": false 00:12:54.922 }, 00:12:54.922 "memory_domains": [ 00:12:54.922 { 00:12:54.922 "dma_device_id": "system", 00:12:54.922 "dma_device_type": 1 00:12:54.922 }, 00:12:54.922 { 00:12:54.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.922 "dma_device_type": 2 00:12:54.922 } 00:12:54.922 ], 00:12:54.922 "driver_specific": {} 00:12:54.922 }' 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.922 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:55.182 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:55.442 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:55.442 "name": "BaseBdev3", 00:12:55.442 "aliases": [ 00:12:55.442 "98ddebe7-19bd-4b53-8e37-8788e0592e94" 00:12:55.442 ], 00:12:55.442 "product_name": "Malloc disk", 00:12:55.442 "block_size": 512, 00:12:55.442 "num_blocks": 65536, 00:12:55.442 "uuid": "98ddebe7-19bd-4b53-8e37-8788e0592e94", 00:12:55.442 "assigned_rate_limits": { 00:12:55.442 "rw_ios_per_sec": 0, 00:12:55.442 "rw_mbytes_per_sec": 0, 00:12:55.442 "r_mbytes_per_sec": 0, 00:12:55.442 "w_mbytes_per_sec": 0 00:12:55.442 }, 00:12:55.442 "claimed": true, 00:12:55.442 "claim_type": "exclusive_write", 00:12:55.442 "zoned": false, 00:12:55.442 "supported_io_types": { 
00:12:55.442 "read": true, 00:12:55.442 "write": true, 00:12:55.442 "unmap": true, 00:12:55.442 "flush": true, 00:12:55.442 "reset": true, 00:12:55.442 "nvme_admin": false, 00:12:55.442 "nvme_io": false, 00:12:55.442 "nvme_io_md": false, 00:12:55.442 "write_zeroes": true, 00:12:55.442 "zcopy": true, 00:12:55.442 "get_zone_info": false, 00:12:55.442 "zone_management": false, 00:12:55.442 "zone_append": false, 00:12:55.442 "compare": false, 00:12:55.442 "compare_and_write": false, 00:12:55.442 "abort": true, 00:12:55.442 "seek_hole": false, 00:12:55.442 "seek_data": false, 00:12:55.442 "copy": true, 00:12:55.442 "nvme_iov_md": false 00:12:55.442 }, 00:12:55.442 "memory_domains": [ 00:12:55.442 { 00:12:55.442 "dma_device_id": "system", 00:12:55.442 "dma_device_type": 1 00:12:55.442 }, 00:12:55.442 { 00:12:55.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.442 "dma_device_type": 2 00:12:55.442 } 00:12:55.442 ], 00:12:55.442 "driver_specific": {} 00:12:55.442 }' 00:12:55.442 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.442 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.442 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:55.442 18:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:55.700 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:55.958 [2024-07-15 18:27:41.483320] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.958 [2024-07-15 18:27:41.483347] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.958 [2024-07-15 18:27:41.483396] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.958 [2024-07-15 18:27:41.483446] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.958 [2024-07-15 18:27:41.483454] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0da90 name Existed_Raid, state offline 00:12:55.958 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2782226 00:12:55.958 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2782226 ']' 00:12:55.958 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2782226 00:12:55.958 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2782226 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2782226' 00:12:56.216 killing process with pid 2782226 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2782226 00:12:56.216 [2024-07-15 18:27:41.552279] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.216 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2782226 00:12:56.216 [2024-07-15 18:27:41.578015] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:56.476 00:12:56.476 real 0m28.597s 00:12:56.476 user 0m53.762s 00:12:56.476 sys 0m3.992s 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.476 ************************************ 00:12:56.476 END TEST raid_state_function_test 00:12:56.476 ************************************ 00:12:56.476 18:27:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:56.476 18:27:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:56.476 18:27:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:56.476 18:27:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.476 18:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.476 ************************************ 00:12:56.476 START TEST raid_state_function_test_sb 00:12:56.476 ************************************ 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2787232 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2787232' 00:12:56.476 Process raid pid: 2787232 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2787232 /var/tmp/spdk-raid.sock 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2787232 ']' 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock...' 00:12:56.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.476 18:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.476 [2024-07-15 18:27:41.883882] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:12:56.476 [2024-07-15 18:27:41.883944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.476 [2024-07-15 18:27:41.985664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.736 [2024-07-15 18:27:42.077130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.736 [2024-07-15 18:27:42.137849] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.736 [2024-07-15 18:27:42.137883] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.302 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.302 18:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:57.302 18:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:57.560 [2024-07-15 18:27:43.073784] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:57.560 [2024-07-15 18:27:43.073825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:57.560 [2024-07-15 18:27:43.073834] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:57.560 [2024-07-15 18:27:43.073842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:57.560 [2024-07-15 18:27:43.073851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:57.560 [2024-07-15 18:27:43.073860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.560 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:57.819 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.819 "name": "Existed_Raid", 00:12:57.819 "uuid": "620d837c-1cc9-48b6-9a55-985487ce2cb4", 00:12:57.819 "strip_size_kb": 64, 00:12:57.819 "state": "configuring", 00:12:57.819 "raid_level": "raid0", 00:12:57.819 "superblock": true, 00:12:57.819 "num_base_bdevs": 3, 00:12:57.819 "num_base_bdevs_discovered": 0, 00:12:57.819 "num_base_bdevs_operational": 3, 00:12:57.819 "base_bdevs_list": [ 00:12:57.819 { 00:12:57.819 "name": "BaseBdev1", 00:12:57.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.819 "is_configured": false, 00:12:57.819 "data_offset": 0, 00:12:57.819 "data_size": 0 00:12:57.819 }, 00:12:57.819 { 00:12:57.819 "name": "BaseBdev2", 00:12:57.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.819 "is_configured": false, 00:12:57.819 "data_offset": 0, 00:12:57.819 "data_size": 0 00:12:57.819 }, 00:12:57.819 { 00:12:57.819 "name": "BaseBdev3", 00:12:57.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.819 "is_configured": false, 00:12:57.819 "data_offset": 0, 00:12:57.819 "data_size": 0 00:12:57.819 } 00:12:57.819 ] 00:12:57.819 }' 00:12:57.819 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.819 18:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.755 18:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:58.755 [2024-07-15 18:27:44.220715] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.755 [2024-07-15 18:27:44.220747] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21b0ba0 name Existed_Raid, state configuring 00:12:58.755 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:59.016 [2024-07-15 18:27:44.481443] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.016 [2024-07-15 18:27:44.481471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.016 [2024-07-15 18:27:44.481479] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.016 [2024-07-15 18:27:44.481488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.016 [2024-07-15 18:27:44.481495] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.016 [2024-07-15 18:27:44.481503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.016 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.300 [2024-07-15 18:27:44.751621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.300 BaseBdev1 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:12:59.300 18:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.580 18:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.839 [ 00:12:59.839 { 00:12:59.839 "name": "BaseBdev1", 00:12:59.839 "aliases": [ 00:12:59.839 "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42" 00:12:59.839 ], 00:12:59.839 "product_name": "Malloc disk", 00:12:59.839 "block_size": 512, 00:12:59.839 "num_blocks": 65536, 00:12:59.839 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:12:59.839 "assigned_rate_limits": { 00:12:59.839 "rw_ios_per_sec": 0, 00:12:59.840 "rw_mbytes_per_sec": 0, 00:12:59.840 "r_mbytes_per_sec": 0, 00:12:59.840 "w_mbytes_per_sec": 0 00:12:59.840 }, 00:12:59.840 "claimed": true, 00:12:59.840 "claim_type": "exclusive_write", 00:12:59.840 "zoned": false, 00:12:59.840 "supported_io_types": { 00:12:59.840 "read": true, 00:12:59.840 "write": true, 00:12:59.840 "unmap": true, 00:12:59.840 "flush": true, 00:12:59.840 "reset": true, 00:12:59.840 "nvme_admin": false, 00:12:59.840 "nvme_io": false, 00:12:59.840 "nvme_io_md": false, 00:12:59.840 "write_zeroes": true, 00:12:59.840 "zcopy": true, 00:12:59.840 "get_zone_info": false, 00:12:59.840 "zone_management": false, 00:12:59.840 "zone_append": false, 00:12:59.840 "compare": false, 00:12:59.840 "compare_and_write": false, 00:12:59.840 "abort": true, 00:12:59.840 "seek_hole": false, 00:12:59.840 "seek_data": false, 00:12:59.840 "copy": true, 00:12:59.840 "nvme_iov_md": false 00:12:59.840 }, 00:12:59.840 "memory_domains": [ 00:12:59.840 { 00:12:59.840 "dma_device_id": "system", 00:12:59.840 "dma_device_type": 1 00:12:59.840 }, 00:12:59.840 { 00:12:59.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.840 
"dma_device_type": 2 00:12:59.840 } 00:12:59.840 ], 00:12:59.840 "driver_specific": {} 00:12:59.840 } 00:12:59.840 ] 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.840 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.099 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.099 "name": "Existed_Raid", 00:13:00.099 "uuid": "4f5b34f5-a5d7-4c47-bfa5-331a0dfcfa02", 00:13:00.099 "strip_size_kb": 64, 
00:13:00.099 "state": "configuring", 00:13:00.099 "raid_level": "raid0", 00:13:00.099 "superblock": true, 00:13:00.099 "num_base_bdevs": 3, 00:13:00.099 "num_base_bdevs_discovered": 1, 00:13:00.099 "num_base_bdevs_operational": 3, 00:13:00.099 "base_bdevs_list": [ 00:13:00.099 { 00:13:00.099 "name": "BaseBdev1", 00:13:00.099 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:13:00.099 "is_configured": true, 00:13:00.099 "data_offset": 2048, 00:13:00.099 "data_size": 63488 00:13:00.099 }, 00:13:00.099 { 00:13:00.099 "name": "BaseBdev2", 00:13:00.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.099 "is_configured": false, 00:13:00.099 "data_offset": 0, 00:13:00.099 "data_size": 0 00:13:00.099 }, 00:13:00.099 { 00:13:00.099 "name": "BaseBdev3", 00:13:00.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.099 "is_configured": false, 00:13:00.099 "data_offset": 0, 00:13:00.099 "data_size": 0 00:13:00.099 } 00:13:00.099 ] 00:13:00.099 }' 00:13:00.099 18:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.099 18:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.667 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:00.924 [2024-07-15 18:27:46.420129] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.924 [2024-07-15 18:27:46.420167] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21b0470 name Existed_Raid, state configuring 00:13:00.925 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:01.183 [2024-07-15 18:27:46.680865] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.183 [2024-07-15 18:27:46.682364] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.183 [2024-07-15 18:27:46.682394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.183 [2024-07-15 18:27:46.682403] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.183 [2024-07-15 18:27:46.682411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- 
# local tmp 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.183 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.442 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.442 "name": "Existed_Raid", 00:13:01.442 "uuid": "a16279db-e1c4-4012-86d0-cfdb94daf133", 00:13:01.442 "strip_size_kb": 64, 00:13:01.442 "state": "configuring", 00:13:01.442 "raid_level": "raid0", 00:13:01.442 "superblock": true, 00:13:01.442 "num_base_bdevs": 3, 00:13:01.442 "num_base_bdevs_discovered": 1, 00:13:01.442 "num_base_bdevs_operational": 3, 00:13:01.442 "base_bdevs_list": [ 00:13:01.442 { 00:13:01.442 "name": "BaseBdev1", 00:13:01.442 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:13:01.442 "is_configured": true, 00:13:01.442 "data_offset": 2048, 00:13:01.442 "data_size": 63488 00:13:01.442 }, 00:13:01.442 { 00:13:01.442 "name": "BaseBdev2", 00:13:01.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.442 "is_configured": false, 00:13:01.442 "data_offset": 0, 00:13:01.442 "data_size": 0 00:13:01.442 }, 00:13:01.442 { 00:13:01.442 "name": "BaseBdev3", 00:13:01.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.442 "is_configured": false, 00:13:01.442 "data_offset": 0, 00:13:01.442 "data_size": 0 00:13:01.442 } 00:13:01.442 ] 00:13:01.442 }' 00:13:01.442 18:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.442 18:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.380 
[2024-07-15 18:27:47.883532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.380 BaseBdev2 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.380 18:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.639 18:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.898 [ 00:13:02.898 { 00:13:02.898 "name": "BaseBdev2", 00:13:02.898 "aliases": [ 00:13:02.898 "9ec196a1-7692-43fc-84bb-9f1e2f27004f" 00:13:02.898 ], 00:13:02.898 "product_name": "Malloc disk", 00:13:02.898 "block_size": 512, 00:13:02.898 "num_blocks": 65536, 00:13:02.898 "uuid": "9ec196a1-7692-43fc-84bb-9f1e2f27004f", 00:13:02.898 "assigned_rate_limits": { 00:13:02.898 "rw_ios_per_sec": 0, 00:13:02.898 "rw_mbytes_per_sec": 0, 00:13:02.898 "r_mbytes_per_sec": 0, 00:13:02.898 "w_mbytes_per_sec": 0 00:13:02.898 }, 00:13:02.898 "claimed": true, 00:13:02.898 "claim_type": "exclusive_write", 00:13:02.898 "zoned": false, 00:13:02.898 "supported_io_types": { 00:13:02.898 "read": true, 00:13:02.898 "write": true, 00:13:02.898 "unmap": 
true, 00:13:02.898 "flush": true, 00:13:02.898 "reset": true, 00:13:02.898 "nvme_admin": false, 00:13:02.898 "nvme_io": false, 00:13:02.898 "nvme_io_md": false, 00:13:02.898 "write_zeroes": true, 00:13:02.898 "zcopy": true, 00:13:02.898 "get_zone_info": false, 00:13:02.898 "zone_management": false, 00:13:02.898 "zone_append": false, 00:13:02.898 "compare": false, 00:13:02.898 "compare_and_write": false, 00:13:02.898 "abort": true, 00:13:02.898 "seek_hole": false, 00:13:02.898 "seek_data": false, 00:13:02.898 "copy": true, 00:13:02.898 "nvme_iov_md": false 00:13:02.898 }, 00:13:02.898 "memory_domains": [ 00:13:02.898 { 00:13:02.898 "dma_device_id": "system", 00:13:02.898 "dma_device_type": 1 00:13:02.898 }, 00:13:02.898 { 00:13:02.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.898 "dma_device_type": 2 00:13:02.898 } 00:13:02.898 ], 00:13:02.898 "driver_specific": {} 00:13:02.898 } 00:13:02.898 ] 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.898 
18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.898 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.157 18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.157 "name": "Existed_Raid", 00:13:03.157 "uuid": "a16279db-e1c4-4012-86d0-cfdb94daf133", 00:13:03.157 "strip_size_kb": 64, 00:13:03.157 "state": "configuring", 00:13:03.157 "raid_level": "raid0", 00:13:03.157 "superblock": true, 00:13:03.157 "num_base_bdevs": 3, 00:13:03.157 "num_base_bdevs_discovered": 2, 00:13:03.157 "num_base_bdevs_operational": 3, 00:13:03.157 "base_bdevs_list": [ 00:13:03.157 { 00:13:03.157 "name": "BaseBdev1", 00:13:03.157 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:13:03.157 "is_configured": true, 00:13:03.157 "data_offset": 2048, 00:13:03.157 "data_size": 63488 00:13:03.157 }, 00:13:03.157 { 00:13:03.157 "name": "BaseBdev2", 00:13:03.157 "uuid": "9ec196a1-7692-43fc-84bb-9f1e2f27004f", 00:13:03.157 "is_configured": true, 00:13:03.157 "data_offset": 2048, 00:13:03.157 "data_size": 63488 00:13:03.157 }, 00:13:03.157 { 00:13:03.157 "name": "BaseBdev3", 00:13:03.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.157 "is_configured": false, 00:13:03.157 "data_offset": 0, 00:13:03.157 "data_size": 0 00:13:03.157 } 00:13:03.157 ] 00:13:03.157 }' 00:13:03.157 
18:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.157 18:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:04.092 [2024-07-15 18:27:49.579364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.092 [2024-07-15 18:27:49.579528] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21b1360 00:13:04.092 [2024-07-15 18:27:49.579541] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:04.092 [2024-07-15 18:27:49.579731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1eb67d0 00:13:04.092 [2024-07-15 18:27:49.579852] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21b1360 00:13:04.092 [2024-07-15 18:27:49.579861] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21b1360 00:13:04.092 [2024-07-15 18:27:49.579971] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.092 BaseBdev3 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:13:04.092 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.350 18:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:04.606 [ 00:13:04.606 { 00:13:04.606 "name": "BaseBdev3", 00:13:04.606 "aliases": [ 00:13:04.606 "e9a51dfd-4386-42e7-be27-4940be23b4fc" 00:13:04.606 ], 00:13:04.606 "product_name": "Malloc disk", 00:13:04.606 "block_size": 512, 00:13:04.606 "num_blocks": 65536, 00:13:04.606 "uuid": "e9a51dfd-4386-42e7-be27-4940be23b4fc", 00:13:04.606 "assigned_rate_limits": { 00:13:04.606 "rw_ios_per_sec": 0, 00:13:04.606 "rw_mbytes_per_sec": 0, 00:13:04.606 "r_mbytes_per_sec": 0, 00:13:04.606 "w_mbytes_per_sec": 0 00:13:04.606 }, 00:13:04.606 "claimed": true, 00:13:04.606 "claim_type": "exclusive_write", 00:13:04.606 "zoned": false, 00:13:04.606 "supported_io_types": { 00:13:04.606 "read": true, 00:13:04.606 "write": true, 00:13:04.606 "unmap": true, 00:13:04.606 "flush": true, 00:13:04.606 "reset": true, 00:13:04.606 "nvme_admin": false, 00:13:04.606 "nvme_io": false, 00:13:04.606 "nvme_io_md": false, 00:13:04.606 "write_zeroes": true, 00:13:04.606 "zcopy": true, 00:13:04.606 "get_zone_info": false, 00:13:04.606 "zone_management": false, 00:13:04.606 "zone_append": false, 00:13:04.606 "compare": false, 00:13:04.606 "compare_and_write": false, 00:13:04.606 "abort": true, 00:13:04.606 "seek_hole": false, 00:13:04.606 "seek_data": false, 00:13:04.606 "copy": true, 00:13:04.606 "nvme_iov_md": false 00:13:04.606 }, 00:13:04.606 "memory_domains": [ 00:13:04.606 { 00:13:04.606 "dma_device_id": "system", 00:13:04.606 "dma_device_type": 1 00:13:04.606 }, 00:13:04.606 { 00:13:04.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.606 
"dma_device_type": 2 00:13:04.606 } 00:13:04.606 ], 00:13:04.606 "driver_specific": {} 00:13:04.606 } 00:13:04.606 ] 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.606 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.864 18:27:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.864 "name": "Existed_Raid", 00:13:04.864 "uuid": "a16279db-e1c4-4012-86d0-cfdb94daf133", 00:13:04.864 "strip_size_kb": 64, 00:13:04.864 "state": "online", 00:13:04.864 "raid_level": "raid0", 00:13:04.864 "superblock": true, 00:13:04.864 "num_base_bdevs": 3, 00:13:04.864 "num_base_bdevs_discovered": 3, 00:13:04.864 "num_base_bdevs_operational": 3, 00:13:04.864 "base_bdevs_list": [ 00:13:04.864 { 00:13:04.864 "name": "BaseBdev1", 00:13:04.864 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:13:04.864 "is_configured": true, 00:13:04.864 "data_offset": 2048, 00:13:04.864 "data_size": 63488 00:13:04.864 }, 00:13:04.864 { 00:13:04.864 "name": "BaseBdev2", 00:13:04.864 "uuid": "9ec196a1-7692-43fc-84bb-9f1e2f27004f", 00:13:04.864 "is_configured": true, 00:13:04.864 "data_offset": 2048, 00:13:04.864 "data_size": 63488 00:13:04.864 }, 00:13:04.864 { 00:13:04.864 "name": "BaseBdev3", 00:13:04.864 "uuid": "e9a51dfd-4386-42e7-be27-4940be23b4fc", 00:13:04.864 "is_configured": true, 00:13:04.864 "data_offset": 2048, 00:13:04.864 "data_size": 63488 00:13:04.864 } 00:13:04.864 ] 00:13:04.864 }' 00:13:04.864 18:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.864 18:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:05.821 [2024-07-15 18:27:51.280276] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:05.821 "name": "Existed_Raid", 00:13:05.821 "aliases": [ 00:13:05.821 "a16279db-e1c4-4012-86d0-cfdb94daf133" 00:13:05.821 ], 00:13:05.821 "product_name": "Raid Volume", 00:13:05.821 "block_size": 512, 00:13:05.821 "num_blocks": 190464, 00:13:05.821 "uuid": "a16279db-e1c4-4012-86d0-cfdb94daf133", 00:13:05.821 "assigned_rate_limits": { 00:13:05.821 "rw_ios_per_sec": 0, 00:13:05.821 "rw_mbytes_per_sec": 0, 00:13:05.821 "r_mbytes_per_sec": 0, 00:13:05.821 "w_mbytes_per_sec": 0 00:13:05.821 }, 00:13:05.821 "claimed": false, 00:13:05.821 "zoned": false, 00:13:05.821 "supported_io_types": { 00:13:05.821 "read": true, 00:13:05.821 "write": true, 00:13:05.821 "unmap": true, 00:13:05.821 "flush": true, 00:13:05.821 "reset": true, 00:13:05.821 "nvme_admin": false, 00:13:05.821 "nvme_io": false, 00:13:05.821 "nvme_io_md": false, 00:13:05.821 "write_zeroes": true, 00:13:05.821 "zcopy": false, 00:13:05.821 "get_zone_info": false, 00:13:05.821 "zone_management": false, 00:13:05.821 "zone_append": false, 00:13:05.821 "compare": false, 00:13:05.821 "compare_and_write": false, 00:13:05.821 "abort": false, 00:13:05.821 "seek_hole": false, 00:13:05.821 "seek_data": false, 00:13:05.821 "copy": false, 00:13:05.821 "nvme_iov_md": false 00:13:05.821 }, 00:13:05.821 "memory_domains": [ 00:13:05.821 { 00:13:05.821 "dma_device_id": "system", 00:13:05.821 
"dma_device_type": 1 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.821 "dma_device_type": 2 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "dma_device_id": "system", 00:13:05.821 "dma_device_type": 1 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.821 "dma_device_type": 2 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "dma_device_id": "system", 00:13:05.821 "dma_device_type": 1 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.821 "dma_device_type": 2 00:13:05.821 } 00:13:05.821 ], 00:13:05.821 "driver_specific": { 00:13:05.821 "raid": { 00:13:05.821 "uuid": "a16279db-e1c4-4012-86d0-cfdb94daf133", 00:13:05.821 "strip_size_kb": 64, 00:13:05.821 "state": "online", 00:13:05.821 "raid_level": "raid0", 00:13:05.821 "superblock": true, 00:13:05.821 "num_base_bdevs": 3, 00:13:05.821 "num_base_bdevs_discovered": 3, 00:13:05.821 "num_base_bdevs_operational": 3, 00:13:05.821 "base_bdevs_list": [ 00:13:05.821 { 00:13:05.821 "name": "BaseBdev1", 00:13:05.821 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:13:05.821 "is_configured": true, 00:13:05.821 "data_offset": 2048, 00:13:05.821 "data_size": 63488 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "name": "BaseBdev2", 00:13:05.821 "uuid": "9ec196a1-7692-43fc-84bb-9f1e2f27004f", 00:13:05.821 "is_configured": true, 00:13:05.821 "data_offset": 2048, 00:13:05.821 "data_size": 63488 00:13:05.821 }, 00:13:05.821 { 00:13:05.821 "name": "BaseBdev3", 00:13:05.821 "uuid": "e9a51dfd-4386-42e7-be27-4940be23b4fc", 00:13:05.821 "is_configured": true, 00:13:05.821 "data_offset": 2048, 00:13:05.821 "data_size": 63488 00:13:05.821 } 00:13:05.821 ] 00:13:05.821 } 00:13:05.821 } 00:13:05.821 }' 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.821 18:27:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:05.821 BaseBdev2 00:13:05.821 BaseBdev3' 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:05.821 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.079 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.079 "name": "BaseBdev1", 00:13:06.079 "aliases": [ 00:13:06.079 "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42" 00:13:06.079 ], 00:13:06.079 "product_name": "Malloc disk", 00:13:06.079 "block_size": 512, 00:13:06.079 "num_blocks": 65536, 00:13:06.079 "uuid": "2ef67c6e-0d2d-4f75-babc-7b2d2c136d42", 00:13:06.079 "assigned_rate_limits": { 00:13:06.079 "rw_ios_per_sec": 0, 00:13:06.079 "rw_mbytes_per_sec": 0, 00:13:06.079 "r_mbytes_per_sec": 0, 00:13:06.079 "w_mbytes_per_sec": 0 00:13:06.079 }, 00:13:06.079 "claimed": true, 00:13:06.079 "claim_type": "exclusive_write", 00:13:06.079 "zoned": false, 00:13:06.079 "supported_io_types": { 00:13:06.079 "read": true, 00:13:06.079 "write": true, 00:13:06.079 "unmap": true, 00:13:06.079 "flush": true, 00:13:06.079 "reset": true, 00:13:06.079 "nvme_admin": false, 00:13:06.079 "nvme_io": false, 00:13:06.079 "nvme_io_md": false, 00:13:06.079 "write_zeroes": true, 00:13:06.079 "zcopy": true, 00:13:06.079 "get_zone_info": false, 00:13:06.079 "zone_management": false, 00:13:06.079 "zone_append": false, 00:13:06.079 "compare": false, 00:13:06.079 "compare_and_write": false, 00:13:06.079 "abort": true, 00:13:06.079 "seek_hole": false, 00:13:06.079 "seek_data": false, 00:13:06.079 "copy": true, 00:13:06.079 "nvme_iov_md": false 00:13:06.079 }, 00:13:06.079 "memory_domains": 
[ 00:13:06.079 { 00:13:06.079 "dma_device_id": "system", 00:13:06.079 "dma_device_type": 1 00:13:06.079 }, 00:13:06.079 { 00:13:06.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.079 "dma_device_type": 2 00:13:06.079 } 00:13:06.079 ], 00:13:06.079 "driver_specific": {} 00:13:06.079 }' 00:13:06.079 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.337 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:06.594 18:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
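The loop above pipes `bdev_get_bdevs` output through jq to pick out the configured base bdevs (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`) and then checks each one's properties. A minimal Python sketch of that same selection, run on a copy of the JSON printed above trimmed to just the fields the filter touches (abbreviated from the log, not the full RPC response):

```python
import json

# Trimmed from the Existed_Raid dump above; only the fields the jq
# filter actually reads are kept.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 512,
  "num_blocks": 190464,
  "driver_specific": {
    "raid": {
      "raid_level": "raid0",
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true, "data_size": 63488},
        {"name": "BaseBdev2", "is_configured": true, "data_size": 63488},
        {"name": "BaseBdev3", "is_configured": true, "data_size": 63488}
      ]
    }
  }
}
""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
```

This matches the `base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'` assignment the trace records before the per-bdev property checks.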
00:13:06.851 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.851 "name": "BaseBdev2", 00:13:06.851 "aliases": [ 00:13:06.851 "9ec196a1-7692-43fc-84bb-9f1e2f27004f" 00:13:06.851 ], 00:13:06.851 "product_name": "Malloc disk", 00:13:06.851 "block_size": 512, 00:13:06.851 "num_blocks": 65536, 00:13:06.851 "uuid": "9ec196a1-7692-43fc-84bb-9f1e2f27004f", 00:13:06.851 "assigned_rate_limits": { 00:13:06.851 "rw_ios_per_sec": 0, 00:13:06.851 "rw_mbytes_per_sec": 0, 00:13:06.851 "r_mbytes_per_sec": 0, 00:13:06.851 "w_mbytes_per_sec": 0 00:13:06.851 }, 00:13:06.851 "claimed": true, 00:13:06.851 "claim_type": "exclusive_write", 00:13:06.851 "zoned": false, 00:13:06.851 "supported_io_types": { 00:13:06.851 "read": true, 00:13:06.851 "write": true, 00:13:06.851 "unmap": true, 00:13:06.851 "flush": true, 00:13:06.851 "reset": true, 00:13:06.851 "nvme_admin": false, 00:13:06.851 "nvme_io": false, 00:13:06.851 "nvme_io_md": false, 00:13:06.851 "write_zeroes": true, 00:13:06.851 "zcopy": true, 00:13:06.851 "get_zone_info": false, 00:13:06.851 "zone_management": false, 00:13:06.851 "zone_append": false, 00:13:06.851 "compare": false, 00:13:06.851 "compare_and_write": false, 00:13:06.851 "abort": true, 00:13:06.851 "seek_hole": false, 00:13:06.851 "seek_data": false, 00:13:06.851 "copy": true, 00:13:06.851 "nvme_iov_md": false 00:13:06.851 }, 00:13:06.851 "memory_domains": [ 00:13:06.851 { 00:13:06.851 "dma_device_id": "system", 00:13:06.851 "dma_device_type": 1 00:13:06.851 }, 00:13:06.851 { 00:13:06.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.851 "dma_device_type": 2 00:13:06.851 } 00:13:06.851 ], 00:13:06.851 "driver_specific": {} 00:13:06.851 }' 00:13:06.851 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.851 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.851 18:27:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.851 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.851 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:07.109 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:07.368 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:07.368 "name": "BaseBdev3", 00:13:07.368 "aliases": [ 00:13:07.368 "e9a51dfd-4386-42e7-be27-4940be23b4fc" 00:13:07.368 ], 00:13:07.368 "product_name": "Malloc disk", 00:13:07.368 "block_size": 512, 00:13:07.368 "num_blocks": 65536, 00:13:07.368 "uuid": "e9a51dfd-4386-42e7-be27-4940be23b4fc", 00:13:07.368 "assigned_rate_limits": { 00:13:07.368 "rw_ios_per_sec": 0, 00:13:07.368 "rw_mbytes_per_sec": 0, 00:13:07.368 "r_mbytes_per_sec": 0, 00:13:07.368 
"w_mbytes_per_sec": 0 00:13:07.368 }, 00:13:07.368 "claimed": true, 00:13:07.368 "claim_type": "exclusive_write", 00:13:07.368 "zoned": false, 00:13:07.368 "supported_io_types": { 00:13:07.368 "read": true, 00:13:07.368 "write": true, 00:13:07.368 "unmap": true, 00:13:07.368 "flush": true, 00:13:07.368 "reset": true, 00:13:07.368 "nvme_admin": false, 00:13:07.368 "nvme_io": false, 00:13:07.368 "nvme_io_md": false, 00:13:07.368 "write_zeroes": true, 00:13:07.368 "zcopy": true, 00:13:07.368 "get_zone_info": false, 00:13:07.368 "zone_management": false, 00:13:07.368 "zone_append": false, 00:13:07.368 "compare": false, 00:13:07.368 "compare_and_write": false, 00:13:07.368 "abort": true, 00:13:07.368 "seek_hole": false, 00:13:07.368 "seek_data": false, 00:13:07.368 "copy": true, 00:13:07.368 "nvme_iov_md": false 00:13:07.368 }, 00:13:07.368 "memory_domains": [ 00:13:07.368 { 00:13:07.368 "dma_device_id": "system", 00:13:07.368 "dma_device_type": 1 00:13:07.368 }, 00:13:07.368 { 00:13:07.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.368 "dma_device_type": 2 00:13:07.368 } 00:13:07.368 ], 00:13:07.368 "driver_specific": {} 00:13:07.368 }' 00:13:07.368 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:07.368 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:07.626 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:07.627 18:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:07.627 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:07.627 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:07.627 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.884 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:07.884 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.884 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.884 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.884 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.884 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:08.452 [2024-07-15 18:27:53.818880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.452 [2024-07-15 18:27:53.818908] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.452 [2024-07-15 18:27:53.818956] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.452 18:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.711 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.711 "name": "Existed_Raid", 00:13:08.711 "uuid": "a16279db-e1c4-4012-86d0-cfdb94daf133", 00:13:08.711 "strip_size_kb": 64, 00:13:08.711 "state": "offline", 00:13:08.711 "raid_level": "raid0", 00:13:08.711 "superblock": true, 00:13:08.711 "num_base_bdevs": 3, 00:13:08.711 "num_base_bdevs_discovered": 2, 00:13:08.711 "num_base_bdevs_operational": 2, 00:13:08.711 "base_bdevs_list": [ 00:13:08.711 { 00:13:08.711 "name": null, 00:13:08.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.711 "is_configured": false, 00:13:08.711 "data_offset": 2048, 00:13:08.711 "data_size": 63488 00:13:08.711 }, 00:13:08.711 { 00:13:08.711 "name": "BaseBdev2", 00:13:08.711 "uuid": "9ec196a1-7692-43fc-84bb-9f1e2f27004f", 00:13:08.711 "is_configured": true, 00:13:08.711 "data_offset": 2048, 00:13:08.711 "data_size": 
63488 00:13:08.711 }, 00:13:08.711 { 00:13:08.711 "name": "BaseBdev3", 00:13:08.711 "uuid": "e9a51dfd-4386-42e7-be27-4940be23b4fc", 00:13:08.711 "is_configured": true, 00:13:08.711 "data_offset": 2048, 00:13:08.711 "data_size": 63488 00:13:08.711 } 00:13:08.711 ] 00:13:08.711 }' 00:13:08.711 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.711 18:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.278 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:09.278 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:09.278 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.278 18:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:09.536 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:09.536 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:09.536 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:09.795 [2024-07-15 18:27:55.288230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.795 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:09.795 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:09.795 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
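After `bdev_malloc_delete BaseBdev1`, the trace shows `has_redundancy raid0` returning 1 and `expected_state=offline`: a raid0 array cannot survive losing a base bdev, so the test expects the state machine to go online → offline. A sketch of that decision as read from this trace (the set of redundant levels here is an assumption inferred from this log, not the full `bdev_raid.sh` case statement):

```python
# Mirrors the has_redundancy/expected_state logic visible in the trace:
# levels with redundancy stay online after losing one base bdev,
# raid0/concat do not. Level list is an assumption from this log.
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """State verify_raid_bdev_state should expect after one base bdev is removed."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

print(expected_state_after_base_bdev_loss("raid0"))  # offline
```

The subsequent `bdev_raid_get_bdevs` dump confirms it: `"state": "offline"` with `num_base_bdevs_discovered` dropped from 3 to 2 and the removed slot's name nulled out.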
00:13:09.795 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:10.054 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:10.054 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:10.054 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:10.312 [2024-07-15 18:27:55.812352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:10.312 [2024-07-15 18:27:55.812396] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21b1360 name Existed_Raid, state offline 00:13:10.312 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:10.312 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:10.312 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.312 18:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:10.571 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:10.571 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:10.571 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:10.571 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:10.571 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:10.571 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:10.830 BaseBdev2 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:10.830 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.089 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.348 [ 00:13:11.348 { 00:13:11.348 "name": "BaseBdev2", 00:13:11.348 "aliases": [ 00:13:11.348 "8809be5f-eee4-47bc-915f-325b169a062e" 00:13:11.348 ], 00:13:11.348 "product_name": "Malloc disk", 00:13:11.348 "block_size": 512, 00:13:11.348 "num_blocks": 65536, 00:13:11.348 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:11.348 "assigned_rate_limits": { 00:13:11.348 "rw_ios_per_sec": 0, 00:13:11.348 "rw_mbytes_per_sec": 0, 00:13:11.348 "r_mbytes_per_sec": 0, 00:13:11.348 "w_mbytes_per_sec": 0 00:13:11.348 }, 00:13:11.348 "claimed": false, 00:13:11.348 "zoned": false, 00:13:11.348 "supported_io_types": { 00:13:11.348 "read": true, 00:13:11.348 "write": true, 00:13:11.348 "unmap": true, 00:13:11.348 "flush": 
true, 00:13:11.348 "reset": true, 00:13:11.348 "nvme_admin": false, 00:13:11.348 "nvme_io": false, 00:13:11.348 "nvme_io_md": false, 00:13:11.348 "write_zeroes": true, 00:13:11.348 "zcopy": true, 00:13:11.348 "get_zone_info": false, 00:13:11.348 "zone_management": false, 00:13:11.348 "zone_append": false, 00:13:11.348 "compare": false, 00:13:11.348 "compare_and_write": false, 00:13:11.348 "abort": true, 00:13:11.348 "seek_hole": false, 00:13:11.348 "seek_data": false, 00:13:11.348 "copy": true, 00:13:11.348 "nvme_iov_md": false 00:13:11.348 }, 00:13:11.348 "memory_domains": [ 00:13:11.348 { 00:13:11.348 "dma_device_id": "system", 00:13:11.348 "dma_device_type": 1 00:13:11.348 }, 00:13:11.348 { 00:13:11.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.348 "dma_device_type": 2 00:13:11.348 } 00:13:11.348 ], 00:13:11.348 "driver_specific": {} 00:13:11.348 } 00:13:11.348 ] 00:13:11.348 18:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:11.348 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:11.348 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:11.348 18:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.607 BaseBdev3 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:11.607 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.865 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:12.124 [ 00:13:12.124 { 00:13:12.124 "name": "BaseBdev3", 00:13:12.124 "aliases": [ 00:13:12.124 "6405b4cf-5044-4687-addd-3c0042a1fd39" 00:13:12.124 ], 00:13:12.124 "product_name": "Malloc disk", 00:13:12.124 "block_size": 512, 00:13:12.124 "num_blocks": 65536, 00:13:12.124 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:12.124 "assigned_rate_limits": { 00:13:12.124 "rw_ios_per_sec": 0, 00:13:12.124 "rw_mbytes_per_sec": 0, 00:13:12.124 "r_mbytes_per_sec": 0, 00:13:12.124 "w_mbytes_per_sec": 0 00:13:12.124 }, 00:13:12.124 "claimed": false, 00:13:12.124 "zoned": false, 00:13:12.124 "supported_io_types": { 00:13:12.124 "read": true, 00:13:12.124 "write": true, 00:13:12.124 "unmap": true, 00:13:12.124 "flush": true, 00:13:12.124 "reset": true, 00:13:12.124 "nvme_admin": false, 00:13:12.124 "nvme_io": false, 00:13:12.124 "nvme_io_md": false, 00:13:12.124 "write_zeroes": true, 00:13:12.124 "zcopy": true, 00:13:12.124 "get_zone_info": false, 00:13:12.124 "zone_management": false, 00:13:12.124 "zone_append": false, 00:13:12.124 "compare": false, 00:13:12.124 "compare_and_write": false, 00:13:12.124 "abort": true, 00:13:12.124 "seek_hole": false, 00:13:12.124 "seek_data": false, 00:13:12.124 "copy": true, 00:13:12.124 "nvme_iov_md": false 00:13:12.124 }, 00:13:12.124 "memory_domains": [ 00:13:12.124 { 00:13:12.124 "dma_device_id": "system", 00:13:12.124 "dma_device_type": 1 
00:13:12.124 }, 00:13:12.124 { 00:13:12.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.124 "dma_device_type": 2 00:13:12.124 } 00:13:12.124 ], 00:13:12.124 "driver_specific": {} 00:13:12.124 } 00:13:12.124 ] 00:13:12.124 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:12.124 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:12.124 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:12.124 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:12.383 [2024-07-15 18:27:57.861308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.383 [2024-07-15 18:27:57.861348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.383 [2024-07-15 18:27:57.861366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.383 [2024-07-15 18:27:57.862757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.383 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.642 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.642 "name": "Existed_Raid", 00:13:12.642 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:12.642 "strip_size_kb": 64, 00:13:12.642 "state": "configuring", 00:13:12.642 "raid_level": "raid0", 00:13:12.642 "superblock": true, 00:13:12.642 "num_base_bdevs": 3, 00:13:12.642 "num_base_bdevs_discovered": 2, 00:13:12.642 "num_base_bdevs_operational": 3, 00:13:12.642 "base_bdevs_list": [ 00:13:12.642 { 00:13:12.642 "name": "BaseBdev1", 00:13:12.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.642 "is_configured": false, 00:13:12.642 "data_offset": 0, 00:13:12.642 "data_size": 0 00:13:12.642 }, 00:13:12.642 { 00:13:12.642 "name": "BaseBdev2", 00:13:12.642 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:12.642 "is_configured": true, 00:13:12.642 "data_offset": 2048, 00:13:12.642 "data_size": 63488 00:13:12.642 }, 00:13:12.642 { 00:13:12.642 "name": "BaseBdev3", 00:13:12.642 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:12.642 "is_configured": true, 00:13:12.642 "data_offset": 2048, 00:13:12.642 
"data_size": 63488 00:13:12.642 } 00:13:12.642 ] 00:13:12.642 }' 00:13:12.642 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.642 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:13.579 [2024-07-15 18:27:58.956232] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.579 18:27:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.894 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.894 "name": "Existed_Raid", 00:13:13.894 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:13.894 "strip_size_kb": 64, 00:13:13.894 "state": "configuring", 00:13:13.895 "raid_level": "raid0", 00:13:13.895 "superblock": true, 00:13:13.895 "num_base_bdevs": 3, 00:13:13.895 "num_base_bdevs_discovered": 1, 00:13:13.895 "num_base_bdevs_operational": 3, 00:13:13.895 "base_bdevs_list": [ 00:13:13.895 { 00:13:13.895 "name": "BaseBdev1", 00:13:13.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.895 "is_configured": false, 00:13:13.895 "data_offset": 0, 00:13:13.895 "data_size": 0 00:13:13.895 }, 00:13:13.895 { 00:13:13.895 "name": null, 00:13:13.895 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:13.895 "is_configured": false, 00:13:13.895 "data_offset": 2048, 00:13:13.895 "data_size": 63488 00:13:13.895 }, 00:13:13.895 { 00:13:13.895 "name": "BaseBdev3", 00:13:13.895 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:13.895 "is_configured": true, 00:13:13.895 "data_offset": 2048, 00:13:13.895 "data_size": 63488 00:13:13.895 } 00:13:13.895 ] 00:13:13.895 }' 00:13:13.895 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.895 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.830 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.830 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.088 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:13:15.088 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.346 [2024-07-15 18:28:00.860750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.346 BaseBdev1 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:15.346 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.912 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:16.478 [ 00:13:16.479 { 00:13:16.479 "name": "BaseBdev1", 00:13:16.479 "aliases": [ 00:13:16.479 "d2445a39-8842-4613-af15-77c27ef8b5fa" 00:13:16.479 ], 00:13:16.479 "product_name": "Malloc disk", 00:13:16.479 "block_size": 512, 00:13:16.479 "num_blocks": 65536, 00:13:16.479 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:16.479 "assigned_rate_limits": { 00:13:16.479 "rw_ios_per_sec": 0, 00:13:16.479 "rw_mbytes_per_sec": 0, 00:13:16.479 "r_mbytes_per_sec": 0, 00:13:16.479 
"w_mbytes_per_sec": 0 00:13:16.479 }, 00:13:16.479 "claimed": true, 00:13:16.479 "claim_type": "exclusive_write", 00:13:16.479 "zoned": false, 00:13:16.479 "supported_io_types": { 00:13:16.479 "read": true, 00:13:16.479 "write": true, 00:13:16.479 "unmap": true, 00:13:16.479 "flush": true, 00:13:16.479 "reset": true, 00:13:16.479 "nvme_admin": false, 00:13:16.479 "nvme_io": false, 00:13:16.479 "nvme_io_md": false, 00:13:16.479 "write_zeroes": true, 00:13:16.479 "zcopy": true, 00:13:16.479 "get_zone_info": false, 00:13:16.479 "zone_management": false, 00:13:16.479 "zone_append": false, 00:13:16.479 "compare": false, 00:13:16.479 "compare_and_write": false, 00:13:16.479 "abort": true, 00:13:16.479 "seek_hole": false, 00:13:16.479 "seek_data": false, 00:13:16.479 "copy": true, 00:13:16.479 "nvme_iov_md": false 00:13:16.479 }, 00:13:16.479 "memory_domains": [ 00:13:16.479 { 00:13:16.479 "dma_device_id": "system", 00:13:16.479 "dma_device_type": 1 00:13:16.479 }, 00:13:16.479 { 00:13:16.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.479 "dma_device_type": 2 00:13:16.479 } 00:13:16.479 ], 00:13:16.479 "driver_specific": {} 00:13:16.479 } 00:13:16.479 ] 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.479 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.738 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.738 "name": "Existed_Raid", 00:13:16.738 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:16.738 "strip_size_kb": 64, 00:13:16.738 "state": "configuring", 00:13:16.738 "raid_level": "raid0", 00:13:16.738 "superblock": true, 00:13:16.738 "num_base_bdevs": 3, 00:13:16.738 "num_base_bdevs_discovered": 2, 00:13:16.738 "num_base_bdevs_operational": 3, 00:13:16.738 "base_bdevs_list": [ 00:13:16.738 { 00:13:16.738 "name": "BaseBdev1", 00:13:16.738 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:16.738 "is_configured": true, 00:13:16.738 "data_offset": 2048, 00:13:16.738 "data_size": 63488 00:13:16.738 }, 00:13:16.738 { 00:13:16.738 "name": null, 00:13:16.738 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:16.738 "is_configured": false, 00:13:16.738 "data_offset": 2048, 00:13:16.738 "data_size": 63488 00:13:16.738 }, 00:13:16.738 { 00:13:16.738 "name": "BaseBdev3", 00:13:16.738 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:16.738 "is_configured": true, 00:13:16.738 "data_offset": 2048, 00:13:16.738 "data_size": 63488 00:13:16.738 } 
00:13:16.738 ] 00:13:16.738 }' 00:13:16.738 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.738 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.674 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.674 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.932 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:17.932 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:18.190 [2024-07-15 18:28:03.720668] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.448 
18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.448 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.707 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.707 "name": "Existed_Raid", 00:13:18.707 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:18.707 "strip_size_kb": 64, 00:13:18.707 "state": "configuring", 00:13:18.707 "raid_level": "raid0", 00:13:18.707 "superblock": true, 00:13:18.707 "num_base_bdevs": 3, 00:13:18.707 "num_base_bdevs_discovered": 1, 00:13:18.707 "num_base_bdevs_operational": 3, 00:13:18.707 "base_bdevs_list": [ 00:13:18.707 { 00:13:18.707 "name": "BaseBdev1", 00:13:18.707 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:18.707 "is_configured": true, 00:13:18.707 "data_offset": 2048, 00:13:18.707 "data_size": 63488 00:13:18.707 }, 00:13:18.707 { 00:13:18.707 "name": null, 00:13:18.707 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:18.707 "is_configured": false, 00:13:18.707 "data_offset": 2048, 00:13:18.707 "data_size": 63488 00:13:18.707 }, 00:13:18.707 { 00:13:18.707 "name": null, 00:13:18.707 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:18.707 "is_configured": false, 00:13:18.707 "data_offset": 2048, 00:13:18.707 "data_size": 63488 00:13:18.707 } 00:13:18.707 ] 00:13:18.707 }' 00:13:18.707 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.707 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.643 18:28:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.643 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.901 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:19.902 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.160 [2024-07-15 18:28:05.601737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.160 18:28:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.160 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.727 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.727 "name": "Existed_Raid", 00:13:20.727 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:20.727 "strip_size_kb": 64, 00:13:20.727 "state": "configuring", 00:13:20.727 "raid_level": "raid0", 00:13:20.727 "superblock": true, 00:13:20.727 "num_base_bdevs": 3, 00:13:20.727 "num_base_bdevs_discovered": 2, 00:13:20.727 "num_base_bdevs_operational": 3, 00:13:20.727 "base_bdevs_list": [ 00:13:20.727 { 00:13:20.727 "name": "BaseBdev1", 00:13:20.727 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:20.727 "is_configured": true, 00:13:20.727 "data_offset": 2048, 00:13:20.727 "data_size": 63488 00:13:20.727 }, 00:13:20.727 { 00:13:20.727 "name": null, 00:13:20.727 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:20.727 "is_configured": false, 00:13:20.727 "data_offset": 2048, 00:13:20.727 "data_size": 63488 00:13:20.727 }, 00:13:20.727 { 00:13:20.727 "name": "BaseBdev3", 00:13:20.727 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:20.727 "is_configured": true, 00:13:20.727 "data_offset": 2048, 00:13:20.727 "data_size": 63488 00:13:20.727 } 00:13:20.727 ] 00:13:20.727 }' 00:13:20.727 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.727 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.295 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.295 18:28:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:21.553 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:21.553 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:21.810 [2024-07-15 18:28:07.262242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.810 18:28:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.078 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.079 "name": "Existed_Raid", 00:13:22.079 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:22.079 "strip_size_kb": 64, 00:13:22.079 "state": "configuring", 00:13:22.079 "raid_level": "raid0", 00:13:22.079 "superblock": true, 00:13:22.079 "num_base_bdevs": 3, 00:13:22.079 "num_base_bdevs_discovered": 1, 00:13:22.079 "num_base_bdevs_operational": 3, 00:13:22.079 "base_bdevs_list": [ 00:13:22.079 { 00:13:22.079 "name": null, 00:13:22.079 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:22.079 "is_configured": false, 00:13:22.079 "data_offset": 2048, 00:13:22.079 "data_size": 63488 00:13:22.079 }, 00:13:22.079 { 00:13:22.079 "name": null, 00:13:22.079 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:22.079 "is_configured": false, 00:13:22.079 "data_offset": 2048, 00:13:22.079 "data_size": 63488 00:13:22.079 }, 00:13:22.079 { 00:13:22.079 "name": "BaseBdev3", 00:13:22.079 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:22.079 "is_configured": true, 00:13:22.079 "data_offset": 2048, 00:13:22.079 "data_size": 63488 00:13:22.079 } 00:13:22.079 ] 00:13:22.079 }' 00:13:22.079 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.079 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.650 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.650 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:22.908 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:22.908 18:28:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:23.167 [2024-07-15 18:28:08.656422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.167 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.168 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.425 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.425 "name": 
"Existed_Raid", 00:13:23.425 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:23.425 "strip_size_kb": 64, 00:13:23.425 "state": "configuring", 00:13:23.425 "raid_level": "raid0", 00:13:23.425 "superblock": true, 00:13:23.425 "num_base_bdevs": 3, 00:13:23.425 "num_base_bdevs_discovered": 2, 00:13:23.425 "num_base_bdevs_operational": 3, 00:13:23.425 "base_bdevs_list": [ 00:13:23.425 { 00:13:23.425 "name": null, 00:13:23.425 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:23.425 "is_configured": false, 00:13:23.425 "data_offset": 2048, 00:13:23.425 "data_size": 63488 00:13:23.425 }, 00:13:23.425 { 00:13:23.425 "name": "BaseBdev2", 00:13:23.425 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:23.425 "is_configured": true, 00:13:23.425 "data_offset": 2048, 00:13:23.425 "data_size": 63488 00:13:23.425 }, 00:13:23.425 { 00:13:23.425 "name": "BaseBdev3", 00:13:23.425 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:23.425 "is_configured": true, 00:13:23.425 "data_offset": 2048, 00:13:23.425 "data_size": 63488 00:13:23.425 } 00:13:23.425 ] 00:13:23.425 }' 00:13:23.425 18:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.425 18:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.358 18:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.358 18:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.358 18:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:24.358 18:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.358 18:28:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:24.616 18:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d2445a39-8842-4613-af15-77c27ef8b5fa 00:13:24.875 [2024-07-15 18:28:10.239834] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:24.875 [2024-07-15 18:28:10.239992] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21b1c50 00:13:24.875 [2024-07-15 18:28:10.240005] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:24.875 [2024-07-15 18:28:10.240186] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x219d2d0 00:13:24.875 [2024-07-15 18:28:10.240310] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21b1c50 00:13:24.875 [2024-07-15 18:28:10.240319] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21b1c50 00:13:24.875 [2024-07-15 18:28:10.240415] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.875 NewBaseBdev 00:13:24.875 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:24.875 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:24.875 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:24.875 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:24.875 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:24.875 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:24.875 18:28:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.155 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:25.413 [ 00:13:25.413 { 00:13:25.413 "name": "NewBaseBdev", 00:13:25.413 "aliases": [ 00:13:25.413 "d2445a39-8842-4613-af15-77c27ef8b5fa" 00:13:25.413 ], 00:13:25.413 "product_name": "Malloc disk", 00:13:25.413 "block_size": 512, 00:13:25.413 "num_blocks": 65536, 00:13:25.413 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:25.413 "assigned_rate_limits": { 00:13:25.413 "rw_ios_per_sec": 0, 00:13:25.413 "rw_mbytes_per_sec": 0, 00:13:25.413 "r_mbytes_per_sec": 0, 00:13:25.413 "w_mbytes_per_sec": 0 00:13:25.413 }, 00:13:25.413 "claimed": true, 00:13:25.413 "claim_type": "exclusive_write", 00:13:25.413 "zoned": false, 00:13:25.413 "supported_io_types": { 00:13:25.413 "read": true, 00:13:25.413 "write": true, 00:13:25.413 "unmap": true, 00:13:25.413 "flush": true, 00:13:25.413 "reset": true, 00:13:25.413 "nvme_admin": false, 00:13:25.413 "nvme_io": false, 00:13:25.413 "nvme_io_md": false, 00:13:25.413 "write_zeroes": true, 00:13:25.413 "zcopy": true, 00:13:25.413 "get_zone_info": false, 00:13:25.413 "zone_management": false, 00:13:25.413 "zone_append": false, 00:13:25.413 "compare": false, 00:13:25.413 "compare_and_write": false, 00:13:25.413 "abort": true, 00:13:25.413 "seek_hole": false, 00:13:25.413 "seek_data": false, 00:13:25.413 "copy": true, 00:13:25.413 "nvme_iov_md": false 00:13:25.413 }, 00:13:25.413 "memory_domains": [ 00:13:25.413 { 00:13:25.413 "dma_device_id": "system", 00:13:25.413 "dma_device_type": 1 00:13:25.413 }, 00:13:25.413 { 00:13:25.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.413 "dma_device_type": 2 00:13:25.413 } 
00:13:25.413 ], 00:13:25.413 "driver_specific": {} 00:13:25.413 } 00:13:25.413 ] 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.413 18:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.671 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.671 "name": "Existed_Raid", 00:13:25.671 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:25.671 "strip_size_kb": 64, 00:13:25.671 "state": "online", 00:13:25.671 
"raid_level": "raid0", 00:13:25.671 "superblock": true, 00:13:25.671 "num_base_bdevs": 3, 00:13:25.671 "num_base_bdevs_discovered": 3, 00:13:25.671 "num_base_bdevs_operational": 3, 00:13:25.671 "base_bdevs_list": [ 00:13:25.671 { 00:13:25.671 "name": "NewBaseBdev", 00:13:25.671 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:25.671 "is_configured": true, 00:13:25.671 "data_offset": 2048, 00:13:25.671 "data_size": 63488 00:13:25.671 }, 00:13:25.671 { 00:13:25.671 "name": "BaseBdev2", 00:13:25.671 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:25.671 "is_configured": true, 00:13:25.671 "data_offset": 2048, 00:13:25.671 "data_size": 63488 00:13:25.671 }, 00:13:25.671 { 00:13:25.671 "name": "BaseBdev3", 00:13:25.671 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:25.671 "is_configured": true, 00:13:25.671 "data_offset": 2048, 00:13:25.671 "data_size": 63488 00:13:25.671 } 00:13:25.671 ] 00:13:25.671 }' 00:13:25.671 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.671 18:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:26.236 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:26.494 [2024-07-15 18:28:11.880581] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.494 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:26.494 "name": "Existed_Raid", 00:13:26.494 "aliases": [ 00:13:26.494 "b093a55f-d992-4ea4-b806-30bbd136bef6" 00:13:26.494 ], 00:13:26.494 "product_name": "Raid Volume", 00:13:26.494 "block_size": 512, 00:13:26.494 "num_blocks": 190464, 00:13:26.494 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:26.494 "assigned_rate_limits": { 00:13:26.494 "rw_ios_per_sec": 0, 00:13:26.494 "rw_mbytes_per_sec": 0, 00:13:26.494 "r_mbytes_per_sec": 0, 00:13:26.494 "w_mbytes_per_sec": 0 00:13:26.494 }, 00:13:26.494 "claimed": false, 00:13:26.494 "zoned": false, 00:13:26.494 "supported_io_types": { 00:13:26.494 "read": true, 00:13:26.494 "write": true, 00:13:26.494 "unmap": true, 00:13:26.494 "flush": true, 00:13:26.494 "reset": true, 00:13:26.494 "nvme_admin": false, 00:13:26.494 "nvme_io": false, 00:13:26.494 "nvme_io_md": false, 00:13:26.494 "write_zeroes": true, 00:13:26.494 "zcopy": false, 00:13:26.494 "get_zone_info": false, 00:13:26.494 "zone_management": false, 00:13:26.494 "zone_append": false, 00:13:26.494 "compare": false, 00:13:26.494 "compare_and_write": false, 00:13:26.494 "abort": false, 00:13:26.494 "seek_hole": false, 00:13:26.494 "seek_data": false, 00:13:26.494 "copy": false, 00:13:26.494 "nvme_iov_md": false 00:13:26.494 }, 00:13:26.494 "memory_domains": [ 00:13:26.494 { 00:13:26.494 "dma_device_id": "system", 00:13:26.494 "dma_device_type": 1 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.494 "dma_device_type": 2 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "dma_device_id": "system", 00:13:26.494 "dma_device_type": 1 00:13:26.494 
}, 00:13:26.494 { 00:13:26.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.494 "dma_device_type": 2 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "dma_device_id": "system", 00:13:26.494 "dma_device_type": 1 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.494 "dma_device_type": 2 00:13:26.494 } 00:13:26.494 ], 00:13:26.494 "driver_specific": { 00:13:26.494 "raid": { 00:13:26.494 "uuid": "b093a55f-d992-4ea4-b806-30bbd136bef6", 00:13:26.494 "strip_size_kb": 64, 00:13:26.494 "state": "online", 00:13:26.494 "raid_level": "raid0", 00:13:26.494 "superblock": true, 00:13:26.494 "num_base_bdevs": 3, 00:13:26.494 "num_base_bdevs_discovered": 3, 00:13:26.494 "num_base_bdevs_operational": 3, 00:13:26.494 "base_bdevs_list": [ 00:13:26.494 { 00:13:26.494 "name": "NewBaseBdev", 00:13:26.494 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:26.494 "is_configured": true, 00:13:26.494 "data_offset": 2048, 00:13:26.494 "data_size": 63488 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "name": "BaseBdev2", 00:13:26.494 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:26.494 "is_configured": true, 00:13:26.494 "data_offset": 2048, 00:13:26.494 "data_size": 63488 00:13:26.494 }, 00:13:26.494 { 00:13:26.494 "name": "BaseBdev3", 00:13:26.494 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:26.494 "is_configured": true, 00:13:26.494 "data_offset": 2048, 00:13:26.494 "data_size": 63488 00:13:26.494 } 00:13:26.494 ] 00:13:26.494 } 00:13:26.494 } 00:13:26.494 }' 00:13:26.494 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.494 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:26.494 BaseBdev2 00:13:26.494 BaseBdev3' 00:13:26.494 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:26.494 
18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:26.494 18:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:26.753 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:26.753 "name": "NewBaseBdev", 00:13:26.753 "aliases": [ 00:13:26.753 "d2445a39-8842-4613-af15-77c27ef8b5fa" 00:13:26.753 ], 00:13:26.753 "product_name": "Malloc disk", 00:13:26.753 "block_size": 512, 00:13:26.753 "num_blocks": 65536, 00:13:26.753 "uuid": "d2445a39-8842-4613-af15-77c27ef8b5fa", 00:13:26.753 "assigned_rate_limits": { 00:13:26.753 "rw_ios_per_sec": 0, 00:13:26.753 "rw_mbytes_per_sec": 0, 00:13:26.753 "r_mbytes_per_sec": 0, 00:13:26.753 "w_mbytes_per_sec": 0 00:13:26.753 }, 00:13:26.753 "claimed": true, 00:13:26.753 "claim_type": "exclusive_write", 00:13:26.753 "zoned": false, 00:13:26.753 "supported_io_types": { 00:13:26.753 "read": true, 00:13:26.753 "write": true, 00:13:26.753 "unmap": true, 00:13:26.753 "flush": true, 00:13:26.753 "reset": true, 00:13:26.753 "nvme_admin": false, 00:13:26.753 "nvme_io": false, 00:13:26.753 "nvme_io_md": false, 00:13:26.753 "write_zeroes": true, 00:13:26.753 "zcopy": true, 00:13:26.753 "get_zone_info": false, 00:13:26.753 "zone_management": false, 00:13:26.753 "zone_append": false, 00:13:26.753 "compare": false, 00:13:26.753 "compare_and_write": false, 00:13:26.753 "abort": true, 00:13:26.753 "seek_hole": false, 00:13:26.753 "seek_data": false, 00:13:26.753 "copy": true, 00:13:26.753 "nvme_iov_md": false 00:13:26.753 }, 00:13:26.753 "memory_domains": [ 00:13:26.753 { 00:13:26.753 "dma_device_id": "system", 00:13:26.753 "dma_device_type": 1 00:13:26.753 }, 00:13:26.753 { 00:13:26.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.753 "dma_device_type": 2 00:13:26.753 } 00:13:26.753 ], 00:13:26.753 
"driver_specific": {} 00:13:26.753 }' 00:13:26.753 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:26.753 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:26.753 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.010 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.011 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.268 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.268 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.268 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:27.268 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.526 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.526 "name": "BaseBdev2", 00:13:27.526 "aliases": [ 00:13:27.526 "8809be5f-eee4-47bc-915f-325b169a062e" 00:13:27.526 ], 00:13:27.526 "product_name": 
"Malloc disk", 00:13:27.526 "block_size": 512, 00:13:27.526 "num_blocks": 65536, 00:13:27.526 "uuid": "8809be5f-eee4-47bc-915f-325b169a062e", 00:13:27.526 "assigned_rate_limits": { 00:13:27.526 "rw_ios_per_sec": 0, 00:13:27.526 "rw_mbytes_per_sec": 0, 00:13:27.526 "r_mbytes_per_sec": 0, 00:13:27.526 "w_mbytes_per_sec": 0 00:13:27.526 }, 00:13:27.526 "claimed": true, 00:13:27.526 "claim_type": "exclusive_write", 00:13:27.526 "zoned": false, 00:13:27.526 "supported_io_types": { 00:13:27.526 "read": true, 00:13:27.526 "write": true, 00:13:27.526 "unmap": true, 00:13:27.526 "flush": true, 00:13:27.526 "reset": true, 00:13:27.526 "nvme_admin": false, 00:13:27.526 "nvme_io": false, 00:13:27.526 "nvme_io_md": false, 00:13:27.526 "write_zeroes": true, 00:13:27.526 "zcopy": true, 00:13:27.526 "get_zone_info": false, 00:13:27.526 "zone_management": false, 00:13:27.526 "zone_append": false, 00:13:27.526 "compare": false, 00:13:27.526 "compare_and_write": false, 00:13:27.526 "abort": true, 00:13:27.526 "seek_hole": false, 00:13:27.526 "seek_data": false, 00:13:27.526 "copy": true, 00:13:27.526 "nvme_iov_md": false 00:13:27.526 }, 00:13:27.526 "memory_domains": [ 00:13:27.526 { 00:13:27.526 "dma_device_id": "system", 00:13:27.526 "dma_device_type": 1 00:13:27.526 }, 00:13:27.526 { 00:13:27.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.526 "dma_device_type": 2 00:13:27.526 } 00:13:27.526 ], 00:13:27.526 "driver_specific": {} 00:13:27.526 }' 00:13:27.526 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.526 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.526 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.526 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.526 18:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.526 
18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.526 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.526 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:27.784 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:28.041 "name": "BaseBdev3", 00:13:28.041 "aliases": [ 00:13:28.041 "6405b4cf-5044-4687-addd-3c0042a1fd39" 00:13:28.041 ], 00:13:28.041 "product_name": "Malloc disk", 00:13:28.041 "block_size": 512, 00:13:28.041 "num_blocks": 65536, 00:13:28.041 "uuid": "6405b4cf-5044-4687-addd-3c0042a1fd39", 00:13:28.041 "assigned_rate_limits": { 00:13:28.041 "rw_ios_per_sec": 0, 00:13:28.041 "rw_mbytes_per_sec": 0, 00:13:28.041 "r_mbytes_per_sec": 0, 00:13:28.041 "w_mbytes_per_sec": 0 00:13:28.041 }, 00:13:28.041 "claimed": true, 00:13:28.041 "claim_type": "exclusive_write", 00:13:28.041 "zoned": false, 00:13:28.041 "supported_io_types": { 00:13:28.041 "read": true, 00:13:28.041 "write": true, 00:13:28.041 "unmap": true, 
00:13:28.041 "flush": true, 00:13:28.041 "reset": true, 00:13:28.041 "nvme_admin": false, 00:13:28.041 "nvme_io": false, 00:13:28.041 "nvme_io_md": false, 00:13:28.041 "write_zeroes": true, 00:13:28.041 "zcopy": true, 00:13:28.041 "get_zone_info": false, 00:13:28.041 "zone_management": false, 00:13:28.041 "zone_append": false, 00:13:28.041 "compare": false, 00:13:28.041 "compare_and_write": false, 00:13:28.041 "abort": true, 00:13:28.041 "seek_hole": false, 00:13:28.041 "seek_data": false, 00:13:28.041 "copy": true, 00:13:28.041 "nvme_iov_md": false 00:13:28.041 }, 00:13:28.041 "memory_domains": [ 00:13:28.041 { 00:13:28.041 "dma_device_id": "system", 00:13:28.041 "dma_device_type": 1 00:13:28.041 }, 00:13:28.041 { 00:13:28.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.041 "dma_device_type": 2 00:13:28.041 } 00:13:28.041 ], 00:13:28.041 "driver_specific": {} 00:13:28.041 }' 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:28.041 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.299 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.299 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.299 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.299 18:28:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.299 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:28.299 18:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:28.595 [2024-07-15 18:28:14.005989] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.595 [2024-07-15 18:28:14.006017] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.595 [2024-07-15 18:28:14.006064] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.595 [2024-07-15 18:28:14.006113] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.595 [2024-07-15 18:28:14.006122] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21b1c50 name Existed_Raid, state offline 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2787232 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2787232 ']' 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2787232 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2787232 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2787232' 00:13:28.595 killing process with pid 2787232 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2787232 00:13:28.595 [2024-07-15 18:28:14.075511] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.595 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2787232 00:13:28.595 [2024-07-15 18:28:14.101314] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.854 18:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:28.854 00:13:28.854 real 0m32.483s 00:13:28.854 user 1m1.171s 00:13:28.854 sys 0m4.284s 00:13:28.854 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.854 18:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.854 ************************************ 00:13:28.854 END TEST raid_state_function_test_sb 00:13:28.854 ************************************ 00:13:28.854 18:28:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:28.854 18:28:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:28.854 18:28:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:28.854 18:28:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.854 18:28:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.854 ************************************ 00:13:28.854 START TEST raid_superblock_test 00:13:28.854 ************************************ 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # 
local raid_level=raid0 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2792872 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2792872 /var/tmp/spdk-raid.sock 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2792872 ']' 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:28.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.854 18:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.854 [2024-07-15 18:28:14.402941] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:13:28.854 [2024-07-15 18:28:14.403010] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792872 ] 00:13:29.113 [2024-07-15 18:28:14.502731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.113 [2024-07-15 18:28:14.592648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.113 [2024-07-15 18:28:14.659528] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.113 [2024-07-15 18:28:14.659563] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.048 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:30.306 malloc1 00:13:30.306 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:30.564 [2024-07-15 18:28:15.862422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.564 [2024-07-15 18:28:15.862469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.564 [2024-07-15 18:28:15.862484] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e0fe20 00:13:30.564 [2024-07-15 18:28:15.862494] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.564 [2024-07-15 18:28:15.864142] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.564 [2024-07-15 18:28:15.864169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:30.564 pt1 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.565 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.565 18:28:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:30.822 malloc2 00:13:30.822 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:31.080 [2024-07-15 18:28:16.388477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:31.080 [2024-07-15 18:28:16.388521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.080 [2024-07-15 18:28:16.388536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb9ed0 00:13:31.080 [2024-07-15 18:28:16.388545] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.080 [2024-07-15 18:28:16.390011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.080 [2024-07-15 18:28:16.390038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:31.080 pt2 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:31.080 18:28:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:31.080 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:31.340 malloc3 00:13:31.340 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:31.598 [2024-07-15 18:28:16.918195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:31.598 [2024-07-15 18:28:16.918236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.598 [2024-07-15 18:28:16.918250] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fbda30 00:13:31.598 [2024-07-15 18:28:16.918259] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.598 [2024-07-15 18:28:16.919725] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.598 [2024-07-15 18:28:16.919751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:31.598 pt3 00:13:31.598 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:31.598 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:31.598 18:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:31.857 [2024-07-15 18:28:17.178916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:31.857 [2024-07-15 18:28:17.180284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:31.857 [2024-07-15 18:28:17.180342] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:31.857 [2024-07-15 18:28:17.180496] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fbea40 00:13:31.857 [2024-07-15 18:28:17.180505] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:31.857 [2024-07-15 18:28:17.180713] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb9050 00:13:31.857 [2024-07-15 18:28:17.180860] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fbea40 00:13:31.857 [2024-07-15 18:28:17.180868] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fbea40 00:13:31.857 [2024-07-15 18:28:17.180976] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.857 
18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.857 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.115 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.115 "name": "raid_bdev1", 00:13:32.115 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:32.115 "strip_size_kb": 64, 00:13:32.115 "state": "online", 00:13:32.115 "raid_level": "raid0", 00:13:32.115 "superblock": true, 00:13:32.115 "num_base_bdevs": 3, 00:13:32.115 "num_base_bdevs_discovered": 3, 00:13:32.115 "num_base_bdevs_operational": 3, 00:13:32.115 "base_bdevs_list": [ 00:13:32.115 { 00:13:32.115 "name": "pt1", 00:13:32.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:32.115 "is_configured": true, 00:13:32.115 "data_offset": 2048, 00:13:32.115 "data_size": 63488 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "name": "pt2", 00:13:32.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.115 "is_configured": true, 00:13:32.115 "data_offset": 2048, 00:13:32.115 "data_size": 63488 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "name": "pt3", 00:13:32.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.115 "is_configured": true, 00:13:32.115 "data_offset": 2048, 00:13:32.115 "data_size": 63488 00:13:32.115 } 00:13:32.115 ] 00:13:32.115 }' 00:13:32.115 18:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.115 18:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:32.681 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:32.940 [2024-07-15 18:28:18.238054] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.940 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:32.940 "name": "raid_bdev1", 00:13:32.940 "aliases": [ 00:13:32.940 "2ab7b665-5135-4b70-9bfb-46526db7ae0e" 00:13:32.940 ], 00:13:32.940 "product_name": "Raid Volume", 00:13:32.940 "block_size": 512, 00:13:32.940 "num_blocks": 190464, 00:13:32.940 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:32.940 "assigned_rate_limits": { 00:13:32.940 "rw_ios_per_sec": 0, 00:13:32.940 "rw_mbytes_per_sec": 0, 00:13:32.940 "r_mbytes_per_sec": 0, 00:13:32.940 "w_mbytes_per_sec": 0 00:13:32.940 }, 00:13:32.940 "claimed": false, 00:13:32.940 "zoned": false, 00:13:32.940 "supported_io_types": { 00:13:32.940 "read": true, 00:13:32.940 "write": true, 00:13:32.940 "unmap": true, 00:13:32.940 "flush": true, 00:13:32.940 "reset": true, 00:13:32.940 "nvme_admin": false, 00:13:32.940 "nvme_io": false, 00:13:32.940 "nvme_io_md": false, 00:13:32.940 "write_zeroes": true, 00:13:32.940 "zcopy": false, 00:13:32.940 "get_zone_info": false, 00:13:32.940 "zone_management": false, 00:13:32.940 "zone_append": false, 00:13:32.940 "compare": false, 00:13:32.940 "compare_and_write": false, 00:13:32.940 "abort": false, 00:13:32.940 
"seek_hole": false, 00:13:32.940 "seek_data": false, 00:13:32.940 "copy": false, 00:13:32.940 "nvme_iov_md": false 00:13:32.940 }, 00:13:32.940 "memory_domains": [ 00:13:32.940 { 00:13:32.940 "dma_device_id": "system", 00:13:32.940 "dma_device_type": 1 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.940 "dma_device_type": 2 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "dma_device_id": "system", 00:13:32.940 "dma_device_type": 1 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.940 "dma_device_type": 2 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "dma_device_id": "system", 00:13:32.940 "dma_device_type": 1 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.940 "dma_device_type": 2 00:13:32.940 } 00:13:32.940 ], 00:13:32.940 "driver_specific": { 00:13:32.940 "raid": { 00:13:32.940 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:32.940 "strip_size_kb": 64, 00:13:32.940 "state": "online", 00:13:32.940 "raid_level": "raid0", 00:13:32.940 "superblock": true, 00:13:32.940 "num_base_bdevs": 3, 00:13:32.940 "num_base_bdevs_discovered": 3, 00:13:32.940 "num_base_bdevs_operational": 3, 00:13:32.940 "base_bdevs_list": [ 00:13:32.940 { 00:13:32.940 "name": "pt1", 00:13:32.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:32.940 "is_configured": true, 00:13:32.940 "data_offset": 2048, 00:13:32.940 "data_size": 63488 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "name": "pt2", 00:13:32.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.940 "is_configured": true, 00:13:32.940 "data_offset": 2048, 00:13:32.940 "data_size": 63488 00:13:32.940 }, 00:13:32.940 { 00:13:32.940 "name": "pt3", 00:13:32.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.940 "is_configured": true, 00:13:32.940 "data_offset": 2048, 00:13:32.940 "data_size": 63488 00:13:32.940 } 00:13:32.940 ] 00:13:32.940 } 00:13:32.940 } 00:13:32.940 }' 
00:13:32.940 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.940 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:32.940 pt2 00:13:32.940 pt3' 00:13:32.940 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:32.940 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:32.940 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:33.199 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:33.199 "name": "pt1", 00:13:33.199 "aliases": [ 00:13:33.199 "00000000-0000-0000-0000-000000000001" 00:13:33.199 ], 00:13:33.199 "product_name": "passthru", 00:13:33.199 "block_size": 512, 00:13:33.199 "num_blocks": 65536, 00:13:33.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.199 "assigned_rate_limits": { 00:13:33.199 "rw_ios_per_sec": 0, 00:13:33.199 "rw_mbytes_per_sec": 0, 00:13:33.199 "r_mbytes_per_sec": 0, 00:13:33.199 "w_mbytes_per_sec": 0 00:13:33.199 }, 00:13:33.199 "claimed": true, 00:13:33.199 "claim_type": "exclusive_write", 00:13:33.199 "zoned": false, 00:13:33.199 "supported_io_types": { 00:13:33.199 "read": true, 00:13:33.199 "write": true, 00:13:33.199 "unmap": true, 00:13:33.199 "flush": true, 00:13:33.199 "reset": true, 00:13:33.199 "nvme_admin": false, 00:13:33.199 "nvme_io": false, 00:13:33.199 "nvme_io_md": false, 00:13:33.199 "write_zeroes": true, 00:13:33.199 "zcopy": true, 00:13:33.199 "get_zone_info": false, 00:13:33.199 "zone_management": false, 00:13:33.199 "zone_append": false, 00:13:33.199 "compare": false, 00:13:33.199 "compare_and_write": false, 00:13:33.199 "abort": true, 00:13:33.199 "seek_hole": false, 00:13:33.199 
"seek_data": false, 00:13:33.199 "copy": true, 00:13:33.199 "nvme_iov_md": false 00:13:33.199 }, 00:13:33.199 "memory_domains": [ 00:13:33.199 { 00:13:33.200 "dma_device_id": "system", 00:13:33.200 "dma_device_type": 1 00:13:33.200 }, 00:13:33.200 { 00:13:33.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.200 "dma_device_type": 2 00:13:33.200 } 00:13:33.200 ], 00:13:33.200 "driver_specific": { 00:13:33.200 "passthru": { 00:13:33.200 "name": "pt1", 00:13:33.200 "base_bdev_name": "malloc1" 00:13:33.200 } 00:13:33.200 } 00:13:33.200 }' 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:33.200 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:33.458 18:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:33.717 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:33.717 "name": "pt2", 00:13:33.717 "aliases": [ 00:13:33.717 "00000000-0000-0000-0000-000000000002" 00:13:33.717 ], 00:13:33.717 "product_name": "passthru", 00:13:33.717 "block_size": 512, 00:13:33.717 "num_blocks": 65536, 00:13:33.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.717 "assigned_rate_limits": { 00:13:33.717 "rw_ios_per_sec": 0, 00:13:33.717 "rw_mbytes_per_sec": 0, 00:13:33.717 "r_mbytes_per_sec": 0, 00:13:33.717 "w_mbytes_per_sec": 0 00:13:33.718 }, 00:13:33.718 "claimed": true, 00:13:33.718 "claim_type": "exclusive_write", 00:13:33.718 "zoned": false, 00:13:33.718 "supported_io_types": { 00:13:33.718 "read": true, 00:13:33.718 "write": true, 00:13:33.718 "unmap": true, 00:13:33.718 "flush": true, 00:13:33.718 "reset": true, 00:13:33.718 "nvme_admin": false, 00:13:33.718 "nvme_io": false, 00:13:33.718 "nvme_io_md": false, 00:13:33.718 "write_zeroes": true, 00:13:33.718 "zcopy": true, 00:13:33.718 "get_zone_info": false, 00:13:33.718 "zone_management": false, 00:13:33.718 "zone_append": false, 00:13:33.718 "compare": false, 00:13:33.718 "compare_and_write": false, 00:13:33.718 "abort": true, 00:13:33.718 "seek_hole": false, 00:13:33.718 "seek_data": false, 00:13:33.718 "copy": true, 00:13:33.718 "nvme_iov_md": false 00:13:33.718 }, 00:13:33.718 "memory_domains": [ 00:13:33.718 { 00:13:33.718 "dma_device_id": "system", 00:13:33.718 "dma_device_type": 1 00:13:33.718 }, 00:13:33.718 { 00:13:33.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.718 "dma_device_type": 2 00:13:33.718 } 00:13:33.718 ], 00:13:33.718 "driver_specific": { 00:13:33.718 "passthru": { 00:13:33.718 "name": "pt2", 00:13:33.718 "base_bdev_name": "malloc2" 00:13:33.718 } 00:13:33.718 } 00:13:33.718 }' 00:13:33.718 18:28:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.718 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:33.976 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.234 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.234 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.234 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:34.234 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.493 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.493 "name": "pt3", 00:13:34.493 "aliases": [ 00:13:34.493 "00000000-0000-0000-0000-000000000003" 00:13:34.493 ], 00:13:34.494 "product_name": "passthru", 00:13:34.494 "block_size": 512, 00:13:34.494 "num_blocks": 65536, 00:13:34.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.494 "assigned_rate_limits": { 
00:13:34.494 "rw_ios_per_sec": 0, 00:13:34.494 "rw_mbytes_per_sec": 0, 00:13:34.494 "r_mbytes_per_sec": 0, 00:13:34.494 "w_mbytes_per_sec": 0 00:13:34.494 }, 00:13:34.494 "claimed": true, 00:13:34.494 "claim_type": "exclusive_write", 00:13:34.494 "zoned": false, 00:13:34.494 "supported_io_types": { 00:13:34.494 "read": true, 00:13:34.494 "write": true, 00:13:34.494 "unmap": true, 00:13:34.494 "flush": true, 00:13:34.494 "reset": true, 00:13:34.494 "nvme_admin": false, 00:13:34.494 "nvme_io": false, 00:13:34.494 "nvme_io_md": false, 00:13:34.494 "write_zeroes": true, 00:13:34.494 "zcopy": true, 00:13:34.494 "get_zone_info": false, 00:13:34.494 "zone_management": false, 00:13:34.494 "zone_append": false, 00:13:34.494 "compare": false, 00:13:34.494 "compare_and_write": false, 00:13:34.494 "abort": true, 00:13:34.494 "seek_hole": false, 00:13:34.494 "seek_data": false, 00:13:34.494 "copy": true, 00:13:34.494 "nvme_iov_md": false 00:13:34.494 }, 00:13:34.494 "memory_domains": [ 00:13:34.494 { 00:13:34.494 "dma_device_id": "system", 00:13:34.494 "dma_device_type": 1 00:13:34.494 }, 00:13:34.494 { 00:13:34.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.494 "dma_device_type": 2 00:13:34.494 } 00:13:34.494 ], 00:13:34.494 "driver_specific": { 00:13:34.494 "passthru": { 00:13:34.494 "name": "pt3", 00:13:34.494 "base_bdev_name": "malloc3" 00:13:34.494 } 00:13:34.494 } 00:13:34.494 }' 00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:13:34.494 18:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.494 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.752 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.752 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.752 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.752 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.752 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:34.752 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:35.011 [2024-07-15 18:28:20.411917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.011 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2ab7b665-5135-4b70-9bfb-46526db7ae0e 00:13:35.011 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2ab7b665-5135-4b70-9bfb-46526db7ae0e ']' 00:13:35.011 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:35.269 [2024-07-15 18:28:20.676315] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.269 [2024-07-15 18:28:20.676333] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.269 [2024-07-15 18:28:20.676380] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.269 [2024-07-15 18:28:20.676433] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:13:35.269 [2024-07-15 18:28:20.676442] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fbea40 name raid_bdev1, state offline 00:13:35.269 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.269 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:35.528 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:35.528 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:35.528 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.528 18:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:35.787 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.787 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:36.046 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.046 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:36.305 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:36.305 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' 
false == true ']' 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:36.564 18:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:36.823 [2024-07-15 18:28:22.212374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:36.823 [2024-07-15 18:28:22.213790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:36.823 [2024-07-15 18:28:22.213835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:36.823 [2024-07-15 18:28:22.213879] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:36.823 [2024-07-15 18:28:22.213914] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:36.823 [2024-07-15 18:28:22.213934] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:36.823 [2024-07-15 18:28:22.213956] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.823 [2024-07-15 18:28:22.213970] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fba100 name raid_bdev1, state configuring 00:13:36.823 request: 00:13:36.823 { 00:13:36.823 "name": "raid_bdev1", 00:13:36.823 "raid_level": "raid0", 00:13:36.823 "base_bdevs": [ 00:13:36.823 "malloc1", 00:13:36.823 "malloc2", 00:13:36.823 "malloc3" 00:13:36.823 ], 00:13:36.823 "strip_size_kb": 64, 00:13:36.823 "superblock": false, 00:13:36.823 "method": "bdev_raid_create", 00:13:36.823 "req_id": 1 00:13:36.823 } 00:13:36.823 Got JSON-RPC error response 00:13:36.823 response: 00:13:36.823 { 00:13:36.823 "code": -17, 00:13:36.823 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:36.823 } 00:13:36.823 18:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:36.823 18:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:13:36.823 18:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.823 18:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.823 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.823 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:37.082 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:37.082 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:37.082 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.342 [2024-07-15 18:28:22.645475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.342 [2024-07-15 18:28:22.645525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.342 [2024-07-15 18:28:22.645540] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fba570 00:13:37.342 [2024-07-15 18:28:22.645550] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.342 [2024-07-15 18:28:22.647234] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.342 [2024-07-15 18:28:22.647262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.342 [2024-07-15 18:28:22.647327] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:37.342 [2024-07-15 18:28:22.647352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.342 pt1 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.342 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.602 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.602 "name": "raid_bdev1", 00:13:37.602 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:37.602 "strip_size_kb": 64, 00:13:37.602 "state": "configuring", 00:13:37.602 "raid_level": "raid0", 00:13:37.602 "superblock": true, 00:13:37.602 "num_base_bdevs": 3, 00:13:37.602 "num_base_bdevs_discovered": 1, 00:13:37.602 "num_base_bdevs_operational": 3, 00:13:37.602 "base_bdevs_list": [ 00:13:37.602 { 00:13:37.602 "name": "pt1", 00:13:37.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.602 
"is_configured": true, 00:13:37.602 "data_offset": 2048, 00:13:37.602 "data_size": 63488 00:13:37.602 }, 00:13:37.602 { 00:13:37.602 "name": null, 00:13:37.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.602 "is_configured": false, 00:13:37.602 "data_offset": 2048, 00:13:37.602 "data_size": 63488 00:13:37.602 }, 00:13:37.602 { 00:13:37.602 "name": null, 00:13:37.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.602 "is_configured": false, 00:13:37.602 "data_offset": 2048, 00:13:37.602 "data_size": 63488 00:13:37.602 } 00:13:37.602 ] 00:13:37.602 }' 00:13:37.602 18:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.602 18:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.180 18:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:13:38.180 18:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.438 [2024-07-15 18:28:23.764517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.439 [2024-07-15 18:28:23.764569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.439 [2024-07-15 18:28:23.764588] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e10a40 00:13:38.439 [2024-07-15 18:28:23.764597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.439 [2024-07-15 18:28:23.764947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.439 [2024-07-15 18:28:23.764972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.439 [2024-07-15 18:28:23.765046] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.439 [2024-07-15 
18:28:23.765069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.439 pt2 00:13:38.439 18:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:38.697 [2024-07-15 18:28:24.025233] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.697 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.956 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.956 "name": "raid_bdev1", 00:13:38.956 
"uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:38.956 "strip_size_kb": 64, 00:13:38.956 "state": "configuring", 00:13:38.956 "raid_level": "raid0", 00:13:38.956 "superblock": true, 00:13:38.956 "num_base_bdevs": 3, 00:13:38.956 "num_base_bdevs_discovered": 1, 00:13:38.956 "num_base_bdevs_operational": 3, 00:13:38.956 "base_bdevs_list": [ 00:13:38.956 { 00:13:38.956 "name": "pt1", 00:13:38.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.956 "is_configured": true, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 }, 00:13:38.956 { 00:13:38.956 "name": null, 00:13:38.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.956 "is_configured": false, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 }, 00:13:38.956 { 00:13:38.956 "name": null, 00:13:38.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.956 "is_configured": false, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 } 00:13:38.956 ] 00:13:38.956 }' 00:13:38.956 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.956 18:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.522 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:39.522 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:39.522 18:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.781 [2024-07-15 18:28:25.172314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.781 [2024-07-15 18:28:25.172364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.781 [2024-07-15 18:28:25.172380] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e101f0 00:13:39.781 [2024-07-15 18:28:25.172390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.781 [2024-07-15 18:28:25.172738] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.781 [2024-07-15 18:28:25.172753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.781 [2024-07-15 18:28:25.172813] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:39.782 [2024-07-15 18:28:25.172830] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.782 pt2 00:13:39.782 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:39.782 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:39.782 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:40.040 [2024-07-15 18:28:25.433180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:40.040 [2024-07-15 18:28:25.433225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.040 [2024-07-15 18:28:25.433240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fbc6d0 00:13:40.040 [2024-07-15 18:28:25.433249] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.040 [2024-07-15 18:28:25.433568] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.040 [2024-07-15 18:28:25.433582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:40.040 [2024-07-15 18:28:25.433637] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:40.040 
[2024-07-15 18:28:25.433654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.040 [2024-07-15 18:28:25.433764] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fc10c0 00:13:40.040 [2024-07-15 18:28:25.433773] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:40.040 [2024-07-15 18:28:25.433939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e8b310 00:13:40.040 [2024-07-15 18:28:25.434081] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fc10c0 00:13:40.040 [2024-07-15 18:28:25.434090] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fc10c0 00:13:40.040 [2024-07-15 18:28:25.434189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.040 pt3 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.040 18:28:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.040 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.299 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.299 "name": "raid_bdev1", 00:13:40.299 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:40.299 "strip_size_kb": 64, 00:13:40.299 "state": "online", 00:13:40.299 "raid_level": "raid0", 00:13:40.299 "superblock": true, 00:13:40.299 "num_base_bdevs": 3, 00:13:40.299 "num_base_bdevs_discovered": 3, 00:13:40.299 "num_base_bdevs_operational": 3, 00:13:40.299 "base_bdevs_list": [ 00:13:40.299 { 00:13:40.299 "name": "pt1", 00:13:40.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.299 "is_configured": true, 00:13:40.299 "data_offset": 2048, 00:13:40.299 "data_size": 63488 00:13:40.299 }, 00:13:40.299 { 00:13:40.299 "name": "pt2", 00:13:40.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.299 "is_configured": true, 00:13:40.299 "data_offset": 2048, 00:13:40.299 "data_size": 63488 00:13:40.299 }, 00:13:40.299 { 00:13:40.299 "name": "pt3", 00:13:40.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.299 "is_configured": true, 00:13:40.299 "data_offset": 2048, 00:13:40.299 "data_size": 63488 00:13:40.299 } 00:13:40.299 ] 00:13:40.299 }' 00:13:40.299 18:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.299 18:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_properties raid_bdev1 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:40.866 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:41.125 [2024-07-15 18:28:26.576402] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.125 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:41.125 "name": "raid_bdev1", 00:13:41.125 "aliases": [ 00:13:41.125 "2ab7b665-5135-4b70-9bfb-46526db7ae0e" 00:13:41.125 ], 00:13:41.125 "product_name": "Raid Volume", 00:13:41.125 "block_size": 512, 00:13:41.125 "num_blocks": 190464, 00:13:41.125 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:41.125 "assigned_rate_limits": { 00:13:41.125 "rw_ios_per_sec": 0, 00:13:41.125 "rw_mbytes_per_sec": 0, 00:13:41.125 "r_mbytes_per_sec": 0, 00:13:41.125 "w_mbytes_per_sec": 0 00:13:41.125 }, 00:13:41.125 "claimed": false, 00:13:41.125 "zoned": false, 00:13:41.125 "supported_io_types": { 00:13:41.125 "read": true, 00:13:41.125 "write": true, 00:13:41.125 "unmap": true, 00:13:41.125 "flush": true, 00:13:41.125 "reset": true, 00:13:41.125 "nvme_admin": false, 00:13:41.125 "nvme_io": false, 00:13:41.125 "nvme_io_md": false, 00:13:41.125 "write_zeroes": true, 00:13:41.125 "zcopy": false, 00:13:41.125 
"get_zone_info": false, 00:13:41.125 "zone_management": false, 00:13:41.125 "zone_append": false, 00:13:41.125 "compare": false, 00:13:41.125 "compare_and_write": false, 00:13:41.125 "abort": false, 00:13:41.125 "seek_hole": false, 00:13:41.125 "seek_data": false, 00:13:41.125 "copy": false, 00:13:41.125 "nvme_iov_md": false 00:13:41.125 }, 00:13:41.125 "memory_domains": [ 00:13:41.125 { 00:13:41.125 "dma_device_id": "system", 00:13:41.125 "dma_device_type": 1 00:13:41.125 }, 00:13:41.125 { 00:13:41.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.125 "dma_device_type": 2 00:13:41.125 }, 00:13:41.125 { 00:13:41.125 "dma_device_id": "system", 00:13:41.125 "dma_device_type": 1 00:13:41.125 }, 00:13:41.125 { 00:13:41.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.125 "dma_device_type": 2 00:13:41.125 }, 00:13:41.125 { 00:13:41.125 "dma_device_id": "system", 00:13:41.125 "dma_device_type": 1 00:13:41.125 }, 00:13:41.125 { 00:13:41.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.125 "dma_device_type": 2 00:13:41.125 } 00:13:41.125 ], 00:13:41.125 "driver_specific": { 00:13:41.125 "raid": { 00:13:41.125 "uuid": "2ab7b665-5135-4b70-9bfb-46526db7ae0e", 00:13:41.125 "strip_size_kb": 64, 00:13:41.125 "state": "online", 00:13:41.125 "raid_level": "raid0", 00:13:41.125 "superblock": true, 00:13:41.125 "num_base_bdevs": 3, 00:13:41.125 "num_base_bdevs_discovered": 3, 00:13:41.125 "num_base_bdevs_operational": 3, 00:13:41.125 "base_bdevs_list": [ 00:13:41.125 { 00:13:41.125 "name": "pt1", 00:13:41.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.125 "is_configured": true, 00:13:41.125 "data_offset": 2048, 00:13:41.126 "data_size": 63488 00:13:41.126 }, 00:13:41.126 { 00:13:41.126 "name": "pt2", 00:13:41.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.126 "is_configured": true, 00:13:41.126 "data_offset": 2048, 00:13:41.126 "data_size": 63488 00:13:41.126 }, 00:13:41.126 { 00:13:41.126 "name": "pt3", 00:13:41.126 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:41.126 "is_configured": true, 00:13:41.126 "data_offset": 2048, 00:13:41.126 "data_size": 63488 00:13:41.126 } 00:13:41.126 ] 00:13:41.126 } 00:13:41.126 } 00:13:41.126 }' 00:13:41.126 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.126 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:41.126 pt2 00:13:41.126 pt3' 00:13:41.126 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.126 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:41.126 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.385 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.385 "name": "pt1", 00:13:41.385 "aliases": [ 00:13:41.385 "00000000-0000-0000-0000-000000000001" 00:13:41.385 ], 00:13:41.385 "product_name": "passthru", 00:13:41.385 "block_size": 512, 00:13:41.385 "num_blocks": 65536, 00:13:41.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.385 "assigned_rate_limits": { 00:13:41.385 "rw_ios_per_sec": 0, 00:13:41.385 "rw_mbytes_per_sec": 0, 00:13:41.385 "r_mbytes_per_sec": 0, 00:13:41.385 "w_mbytes_per_sec": 0 00:13:41.385 }, 00:13:41.385 "claimed": true, 00:13:41.385 "claim_type": "exclusive_write", 00:13:41.385 "zoned": false, 00:13:41.385 "supported_io_types": { 00:13:41.385 "read": true, 00:13:41.385 "write": true, 00:13:41.385 "unmap": true, 00:13:41.385 "flush": true, 00:13:41.385 "reset": true, 00:13:41.385 "nvme_admin": false, 00:13:41.385 "nvme_io": false, 00:13:41.385 "nvme_io_md": false, 00:13:41.385 "write_zeroes": true, 00:13:41.385 "zcopy": true, 00:13:41.385 "get_zone_info": false, 
00:13:41.385 "zone_management": false, 00:13:41.385 "zone_append": false, 00:13:41.385 "compare": false, 00:13:41.385 "compare_and_write": false, 00:13:41.385 "abort": true, 00:13:41.385 "seek_hole": false, 00:13:41.385 "seek_data": false, 00:13:41.385 "copy": true, 00:13:41.385 "nvme_iov_md": false 00:13:41.385 }, 00:13:41.385 "memory_domains": [ 00:13:41.385 { 00:13:41.385 "dma_device_id": "system", 00:13:41.385 "dma_device_type": 1 00:13:41.385 }, 00:13:41.385 { 00:13:41.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.385 "dma_device_type": 2 00:13:41.385 } 00:13:41.385 ], 00:13:41.385 "driver_specific": { 00:13:41.385 "passthru": { 00:13:41.385 "name": "pt1", 00:13:41.385 "base_bdev_name": "malloc1" 00:13:41.385 } 00:13:41.385 } 00:13:41.385 }' 00:13:41.385 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.644 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.644 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.644 18:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.644 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.644 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.644 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.644 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.644 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.644 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.902 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.902 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.902 18:28:27 
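The four checks just traced (bdev_raid.sh@205 through @208) each pull one field out of `base_bdev_info` with jq and compare it against the expected passthru geometry. A minimal standalone sketch of that pattern, with the values hard-coded from the log instead of a live `rpc.py` dump (the jq extraction step is elided here, since no SPDK target is running):

```shell
# Values as reported for pt1 in the trace above; in the real script each one
# comes from: echo "$base_bdev_info" | jq .<field>
block_size=512
md_size=null
md_interleave=null
dif_type=null

# Mirror of the [[ ... == ... ]] checks at bdev_raid.sh@205-@208
[ "$block_size" = 512 ] && [ "$md_size" = null ] \
  && [ "$md_interleave" = null ] && [ "$dif_type" = null ] \
  && verified=yes || verified=no
echo "base bdev geometry verified: $verified"
```

The same four-check sequence then repeats verbatim for pt2 and pt3 below.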
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.902 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:41.902 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:42.159 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:42.159 "name": "pt2", 00:13:42.159 "aliases": [ 00:13:42.159 "00000000-0000-0000-0000-000000000002" 00:13:42.159 ], 00:13:42.159 "product_name": "passthru", 00:13:42.159 "block_size": 512, 00:13:42.159 "num_blocks": 65536, 00:13:42.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.159 "assigned_rate_limits": { 00:13:42.159 "rw_ios_per_sec": 0, 00:13:42.159 "rw_mbytes_per_sec": 0, 00:13:42.159 "r_mbytes_per_sec": 0, 00:13:42.159 "w_mbytes_per_sec": 0 00:13:42.159 }, 00:13:42.159 "claimed": true, 00:13:42.159 "claim_type": "exclusive_write", 00:13:42.159 "zoned": false, 00:13:42.159 "supported_io_types": { 00:13:42.159 "read": true, 00:13:42.159 "write": true, 00:13:42.159 "unmap": true, 00:13:42.159 "flush": true, 00:13:42.159 "reset": true, 00:13:42.159 "nvme_admin": false, 00:13:42.159 "nvme_io": false, 00:13:42.159 "nvme_io_md": false, 00:13:42.159 "write_zeroes": true, 00:13:42.159 "zcopy": true, 00:13:42.159 "get_zone_info": false, 00:13:42.159 "zone_management": false, 00:13:42.159 "zone_append": false, 00:13:42.159 "compare": false, 00:13:42.159 "compare_and_write": false, 00:13:42.159 "abort": true, 00:13:42.159 "seek_hole": false, 00:13:42.159 "seek_data": false, 00:13:42.159 "copy": true, 00:13:42.159 "nvme_iov_md": false 00:13:42.159 }, 00:13:42.159 "memory_domains": [ 00:13:42.159 { 00:13:42.159 "dma_device_id": "system", 00:13:42.159 "dma_device_type": 1 00:13:42.159 }, 00:13:42.159 { 00:13:42.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.159 
"dma_device_type": 2 00:13:42.159 } 00:13:42.159 ], 00:13:42.159 "driver_specific": { 00:13:42.159 "passthru": { 00:13:42.159 "name": "pt2", 00:13:42.159 "base_bdev_name": "malloc2" 00:13:42.159 } 00:13:42.159 } 00:13:42.159 }' 00:13:42.159 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.159 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.160 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.160 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.160 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.160 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.160 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:42.446 18:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:42.714 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:42.714 "name": "pt3", 00:13:42.714 "aliases": [ 00:13:42.714 
"00000000-0000-0000-0000-000000000003" 00:13:42.714 ], 00:13:42.714 "product_name": "passthru", 00:13:42.714 "block_size": 512, 00:13:42.714 "num_blocks": 65536, 00:13:42.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.714 "assigned_rate_limits": { 00:13:42.714 "rw_ios_per_sec": 0, 00:13:42.714 "rw_mbytes_per_sec": 0, 00:13:42.714 "r_mbytes_per_sec": 0, 00:13:42.714 "w_mbytes_per_sec": 0 00:13:42.714 }, 00:13:42.714 "claimed": true, 00:13:42.714 "claim_type": "exclusive_write", 00:13:42.714 "zoned": false, 00:13:42.714 "supported_io_types": { 00:13:42.714 "read": true, 00:13:42.714 "write": true, 00:13:42.714 "unmap": true, 00:13:42.714 "flush": true, 00:13:42.714 "reset": true, 00:13:42.714 "nvme_admin": false, 00:13:42.714 "nvme_io": false, 00:13:42.714 "nvme_io_md": false, 00:13:42.714 "write_zeroes": true, 00:13:42.714 "zcopy": true, 00:13:42.714 "get_zone_info": false, 00:13:42.714 "zone_management": false, 00:13:42.714 "zone_append": false, 00:13:42.714 "compare": false, 00:13:42.714 "compare_and_write": false, 00:13:42.714 "abort": true, 00:13:42.714 "seek_hole": false, 00:13:42.714 "seek_data": false, 00:13:42.714 "copy": true, 00:13:42.714 "nvme_iov_md": false 00:13:42.714 }, 00:13:42.714 "memory_domains": [ 00:13:42.714 { 00:13:42.714 "dma_device_id": "system", 00:13:42.714 "dma_device_type": 1 00:13:42.714 }, 00:13:42.714 { 00:13:42.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.714 "dma_device_type": 2 00:13:42.714 } 00:13:42.714 ], 00:13:42.714 "driver_specific": { 00:13:42.714 "passthru": { 00:13:42.714 "name": "pt3", 00:13:42.714 "base_bdev_name": "malloc3" 00:13:42.714 } 00:13:42.714 } 00:13:42.714 }' 00:13:42.714 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.714 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.714 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.714 18:28:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.972 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:42.973 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:43.230 [2024-07-15 18:28:28.641965] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2ab7b665-5135-4b70-9bfb-46526db7ae0e '!=' 2ab7b665-5135-4b70-9bfb-46526db7ae0e ']' 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2792872 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2792872 ']' 00:13:43.230 18:28:28 
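The `has_redundancy raid0` call traced above falls through the `case` at bdev_raid.sh@213 and returns 1, which is why the test proceeds straight to teardown instead of running degraded-array checks. A hedged reconstruction of that helper (the exact set of redundant levels in the real script is an assumption here):

```shell
# Sketch of has_redundancy: levels with redundancy return 0, striped-only
# levels return 1. Assumption: the real bdev_raid.sh may list further levels.
has_redundancy() {
    case $1 in
    raid1 | raid5f) return 0 ;;
    *) return 1 ;;
    esac
}

has_redundancy raid0 && redundant=yes || redundant=no
echo "raid0 redundant: $redundant"
```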
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2792872 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2792872 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2792872' 00:13:43.230 killing process with pid 2792872 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2792872 00:13:43.230 [2024-07-15 18:28:28.714321] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.230 [2024-07-15 18:28:28.714378] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.230 [2024-07-15 18:28:28.714431] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.230 [2024-07-15 18:28:28.714440] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fc10c0 name raid_bdev1, state offline 00:13:43.230 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2792872 00:13:43.230 [2024-07-15 18:28:28.740743] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.489 18:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:43.489 00:13:43.489 real 0m14.593s 00:13:43.489 user 0m26.904s 00:13:43.489 sys 0m2.044s 00:13:43.489 18:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.489 18:28:28 
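The `killprocess` sequence in the trace checks that the pid is still alive (`kill -0`), reads its command name with `ps --no-headers -o comm=`, refuses to signal a bare `sudo` wrapper, then kills and reaps it. A simplified sketch of that guard, exercised against a throwaway `sleep` (assumption: this condenses the real autotest_common.sh helper, which has more branches):

```shell
# Condensed sketch of autotest_common.sh's killprocess guard. Assumption:
# the real helper also special-cases sudo-wrapped pids and non-Linux hosts.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1    # must still be running
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1          # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap it so no zombie remains
    return 0
}

sleep 30 &
victim=$!
killprocess "$victim" && killed=yes || killed=no
echo "killed: $killed"
```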
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.489 ************************************ 00:13:43.489 END TEST raid_superblock_test 00:13:43.489 ************************************ 00:13:43.489 18:28:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:43.489 18:28:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:13:43.489 18:28:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:43.489 18:28:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.489 18:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.489 ************************************ 00:13:43.489 START TEST raid_read_error_test 00:13:43.489 ************************************ 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:43.489 18:28:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VQ9b4v61zO 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2795523 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2795523 /var/tmp/spdk-raid.sock 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2795523 ']' 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:43.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.489 18:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.747 [2024-07-15 18:28:29.048749] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
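Before bdevperf comes up, the trace shows the raid create argument being assembled (bdev_raid.sh@798-800): any level other than raid1 gets a 64 KiB strip size appended as `-z 64`, and a per-run bdevperf log is allocated with `mktemp -p /raidtest`. The argument-building step sketches standalone as:

```shell
# Sketch of the create_arg assembly at bdev_raid.sh@798-800; values taken
# from the raid0 / 3-base-bdev run traced above.
raid_level=raid0
create_arg=""
if [ "$raid_level" != "raid1" ]; then
    strip_size=64                             # KiB; matches strip_size_kb in the dumps
    create_arg="$create_arg -z $strip_size"
fi
echo "create_arg:$create_arg"
```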
00:13:43.747 [2024-07-15 18:28:29.048808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795523 ] 00:13:43.747 [2024-07-15 18:28:29.147103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.747 [2024-07-15 18:28:29.243654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.005 [2024-07-15 18:28:29.301850] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.005 [2024-07-15 18:28:29.301879] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.571 18:28:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.571 18:28:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:44.571 18:28:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:44.571 18:28:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:44.829 BaseBdev1_malloc 00:13:44.829 18:28:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:45.096 true 00:13:45.096 18:28:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:45.354 [2024-07-15 18:28:30.747621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:45.354 [2024-07-15 18:28:30.747660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:45.354 [2024-07-15 18:28:30.747677] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1760d20 00:13:45.354 [2024-07-15 18:28:30.747686] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.354 [2024-07-15 18:28:30.749465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.354 [2024-07-15 18:28:30.749492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.354 BaseBdev1 00:13:45.354 18:28:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:45.354 18:28:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.613 BaseBdev2_malloc 00:13:45.613 18:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:45.872 true 00:13:45.872 18:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:46.131 [2024-07-15 18:28:31.510396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:46.131 [2024-07-15 18:28:31.510436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.131 [2024-07-15 18:28:31.510453] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1765d50 00:13:46.131 [2024-07-15 18:28:31.510462] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.131 [2024-07-15 18:28:31.512113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.131 [2024-07-15 18:28:31.512144] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.131 BaseBdev2 00:13:46.131 18:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:46.131 18:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:46.390 BaseBdev3_malloc 00:13:46.390 18:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:46.649 true 00:13:46.649 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:46.909 [2024-07-15 18:28:32.276997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:46.909 [2024-07-15 18:28:32.277036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.909 [2024-07-15 18:28:32.277054] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1764ef0 00:13:46.909 [2024-07-15 18:28:32.277063] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.909 [2024-07-15 18:28:32.278719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.909 [2024-07-15 18:28:32.278746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:46.909 BaseBdev3 00:13:46.909 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:47.168 [2024-07-15 18:28:32.529700] 
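Each base bdev in the run above is a three-layer stack: a 32 MiB malloc bdev, wrapped by an error-injection bdev (`bdev_error_create`, exposed as `EE_<name>`), wrapped in turn by a passthru bdev carrying the `BaseBdevN` name; the three passthrus are then striped with `bdev_raid_create`. Since no SPDK target is listening here, this sketch only assembles the RPC command lines from the trace rather than issuing them (the socket path mirrors the log):

```shell
# Build (but do not execute) the RPC sequence that creates the three-layer
# base bdev stacks and the raid0 volume, as traced at bdev_raid.sh@812-819.
rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
cmds=""
for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    cmds="$cmds
$rpc bdev_malloc_create 32 512 -b ${bdev}_malloc
$rpc bdev_error_create ${bdev}_malloc
$rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}"
done
cmds="$cmds
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s"
printf '%s\n' "$cmds"
```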
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.168 [2024-07-15 18:28:32.531083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.168 [2024-07-15 18:28:32.531154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.168 [2024-07-15 18:28:32.531364] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1768a00 00:13:47.168 [2024-07-15 18:28:32.531375] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:47.168 [2024-07-15 18:28:32.531573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15bc750 00:13:47.168 [2024-07-15 18:28:32.531731] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1768a00 00:13:47.168 [2024-07-15 18:28:32.531740] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1768a00 00:13:47.168 [2024-07-15 18:28:32.531848] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.168 
18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.168 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.427 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.427 "name": "raid_bdev1", 00:13:47.427 "uuid": "625c3c4e-9a0a-4191-9420-0757d8287522", 00:13:47.427 "strip_size_kb": 64, 00:13:47.427 "state": "online", 00:13:47.427 "raid_level": "raid0", 00:13:47.427 "superblock": true, 00:13:47.427 "num_base_bdevs": 3, 00:13:47.427 "num_base_bdevs_discovered": 3, 00:13:47.427 "num_base_bdevs_operational": 3, 00:13:47.427 "base_bdevs_list": [ 00:13:47.427 { 00:13:47.427 "name": "BaseBdev1", 00:13:47.427 "uuid": "5bd1a9fc-94af-55b7-81d2-883e3d230410", 00:13:47.427 "is_configured": true, 00:13:47.427 "data_offset": 2048, 00:13:47.427 "data_size": 63488 00:13:47.427 }, 00:13:47.427 { 00:13:47.427 "name": "BaseBdev2", 00:13:47.427 "uuid": "d1840052-ab30-5597-95a2-10a1ed55e1ca", 00:13:47.427 "is_configured": true, 00:13:47.427 "data_offset": 2048, 00:13:47.427 "data_size": 63488 00:13:47.427 }, 00:13:47.427 { 00:13:47.427 "name": "BaseBdev3", 00:13:47.427 "uuid": "34cb215f-2a4a-5585-9812-cd953eaf8a0b", 00:13:47.427 "is_configured": true, 00:13:47.427 "data_offset": 2048, 00:13:47.427 "data_size": 63488 00:13:47.427 } 00:13:47.427 ] 00:13:47.427 }' 00:13:47.427 18:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.427 18:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.994 18:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:47.994 18:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:47.994 [2024-07-15 18:28:33.512617] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1768930 00:13:48.931 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:49.190 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:49.190 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.191 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.450 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.450 "name": "raid_bdev1", 00:13:49.450 "uuid": "625c3c4e-9a0a-4191-9420-0757d8287522", 00:13:49.450 "strip_size_kb": 64, 00:13:49.450 "state": "online", 00:13:49.450 "raid_level": "raid0", 00:13:49.450 "superblock": true, 00:13:49.450 "num_base_bdevs": 3, 00:13:49.450 "num_base_bdevs_discovered": 3, 00:13:49.450 "num_base_bdevs_operational": 3, 00:13:49.450 "base_bdevs_list": [ 00:13:49.450 { 00:13:49.450 "name": "BaseBdev1", 00:13:49.450 "uuid": "5bd1a9fc-94af-55b7-81d2-883e3d230410", 00:13:49.450 "is_configured": true, 00:13:49.450 "data_offset": 2048, 00:13:49.450 "data_size": 63488 00:13:49.450 }, 00:13:49.450 { 00:13:49.450 "name": "BaseBdev2", 00:13:49.450 "uuid": "d1840052-ab30-5597-95a2-10a1ed55e1ca", 00:13:49.450 "is_configured": true, 00:13:49.450 "data_offset": 2048, 00:13:49.450 "data_size": 63488 00:13:49.450 }, 00:13:49.450 { 00:13:49.450 "name": "BaseBdev3", 00:13:49.450 "uuid": "34cb215f-2a4a-5585-9812-cd953eaf8a0b", 00:13:49.450 "is_configured": true, 00:13:49.450 "data_offset": 2048, 00:13:49.450 "data_size": 63488 00:13:49.450 } 00:13:49.450 ] 00:13:49.450 }' 00:13:49.450 18:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.450 18:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.018 18:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:50.277 [2024-07-15 18:28:35.787581] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.277 [2024-07-15 18:28:35.787615] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.277 [2024-07-15 18:28:35.791042] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.277 [2024-07-15 18:28:35.791077] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.277 [2024-07-15 18:28:35.791109] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.277 [2024-07-15 18:28:35.791117] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1768a00 name raid_bdev1, state offline 00:13:50.277 0 00:13:50.277 18:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2795523 00:13:50.277 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2795523 ']' 00:13:50.277 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2795523 00:13:50.277 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:50.277 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.277 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2795523 00:13:50.536 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.536 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.536 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2795523' 00:13:50.536 killing process with pid 2795523 00:13:50.536 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2795523 00:13:50.536 [2024-07-15 18:28:35.853929] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:13:50.536 18:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2795523 00:13:50.536 [2024-07-15 18:28:35.873882] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.536 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VQ9b4v61zO 00:13:50.536 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:50.536 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:50.536 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:13:50.536 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:50.537 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:50.537 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:50.537 18:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:13:50.537 00:13:50.537 real 0m7.109s 00:13:50.537 user 0m11.626s 00:13:50.537 sys 0m0.964s 00:13:50.537 18:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.537 18:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.537 ************************************ 00:13:50.537 END TEST raid_read_error_test 00:13:50.537 ************************************ 00:13:50.796 18:28:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:50.796 18:28:36 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:50.796 18:28:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:50.796 18:28:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.796 18:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.796 ************************************ 
00:13:50.796 START TEST raid_write_error_test 00:13:50.796 ************************************ 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
raid_bdev_name=raid_bdev1 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Gypya7MXVG 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2796688 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2796688 /var/tmp/spdk-raid.sock 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2796688 ']' 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:50.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.796 18:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.796 [2024-07-15 18:28:36.202101] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:50.796 [2024-07-15 18:28:36.202167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796688 ] 00:13:50.796 [2024-07-15 18:28:36.302245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.055 [2024-07-15 18:28:36.393220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.055 [2024-07-15 18:28:36.459249] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.055 [2024-07-15 18:28:36.459286] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.622 18:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.622 18:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:51.622 18:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:51.622 18:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.881 BaseBdev1_malloc 00:13:51.881 18:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:52.140 true 00:13:52.140 18:28:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:52.399 [2024-07-15 18:28:37.909716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:52.399 [2024-07-15 18:28:37.909760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.399 [2024-07-15 18:28:37.909778] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaedd20 00:13:52.399 [2024-07-15 18:28:37.909788] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.399 [2024-07-15 18:28:37.911470] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.399 [2024-07-15 18:28:37.911497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.399 BaseBdev1 00:13:52.399 18:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:52.399 18:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.658 BaseBdev2_malloc 00:13:52.658 18:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:52.917 true 00:13:52.917 18:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.176 [2024-07-15 18:28:38.688488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.176 [2024-07-15 18:28:38.688530] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.176 [2024-07-15 18:28:38.688547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaf2d50 00:13:53.176 [2024-07-15 18:28:38.688557] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.176 [2024-07-15 18:28:38.690045] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.176 [2024-07-15 18:28:38.690072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.176 BaseBdev2 00:13:53.176 18:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.176 18:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.435 BaseBdev3_malloc 00:13:53.435 18:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:53.694 true 00:13:53.694 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:53.952 [2024-07-15 18:28:39.475059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:53.952 [2024-07-15 18:28:39.475104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.952 [2024-07-15 18:28:39.475122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaf1ef0 00:13:53.952 [2024-07-15 18:28:39.475132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.952 [2024-07-15 18:28:39.476606] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:13:53.952 [2024-07-15 18:28:39.476633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.952 BaseBdev3 00:13:53.952 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:54.211 [2024-07-15 18:28:39.735777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.211 [2024-07-15 18:28:39.737028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.211 [2024-07-15 18:28:39.737094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.211 [2024-07-15 18:28:39.737295] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xaf5a00 00:13:54.211 [2024-07-15 18:28:39.737305] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:54.211 [2024-07-15 18:28:39.737489] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x949750 00:13:54.211 [2024-07-15 18:28:39.737638] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaf5a00 00:13:54.211 [2024-07-15 18:28:39.737647] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xaf5a00 00:13:54.211 [2024-07-15 18:28:39.737747] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.211 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.471 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.471 18:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.471 18:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.471 "name": "raid_bdev1", 00:13:54.471 "uuid": "be5a57ad-29ec-44f4-99bc-bc41fc9d93a0", 00:13:54.471 "strip_size_kb": 64, 00:13:54.471 "state": "online", 00:13:54.471 "raid_level": "raid0", 00:13:54.471 "superblock": true, 00:13:54.471 "num_base_bdevs": 3, 00:13:54.471 "num_base_bdevs_discovered": 3, 00:13:54.471 "num_base_bdevs_operational": 3, 00:13:54.471 "base_bdevs_list": [ 00:13:54.471 { 00:13:54.471 "name": "BaseBdev1", 00:13:54.471 "uuid": "0c64b741-43fb-5f12-b7af-acd9615f9a1d", 00:13:54.471 "is_configured": true, 00:13:54.471 "data_offset": 2048, 00:13:54.471 "data_size": 63488 00:13:54.471 }, 00:13:54.471 { 00:13:54.471 "name": "BaseBdev2", 00:13:54.471 "uuid": "6a5ea71c-80cf-5689-8181-55a556423da8", 00:13:54.471 "is_configured": true, 00:13:54.471 "data_offset": 2048, 00:13:54.471 "data_size": 63488 00:13:54.471 }, 00:13:54.471 { 00:13:54.471 "name": "BaseBdev3", 00:13:54.471 
"uuid": "f3bfae7d-f8a3-5975-981d-0df3097459e2", 00:13:54.471 "is_configured": true, 00:13:54.471 "data_offset": 2048, 00:13:54.471 "data_size": 63488 00:13:54.471 } 00:13:54.471 ] 00:13:54.471 }' 00:13:54.471 18:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.471 18:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.408 18:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:55.408 18:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:55.408 [2024-07-15 18:28:40.746771] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaf5930 00:13:56.345 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:56.345 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:56.345 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:56.345 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:56.604 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.605 18:28:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.605 18:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.864 18:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.864 "name": "raid_bdev1", 00:13:56.864 "uuid": "be5a57ad-29ec-44f4-99bc-bc41fc9d93a0", 00:13:56.864 "strip_size_kb": 64, 00:13:56.864 "state": "online", 00:13:56.864 "raid_level": "raid0", 00:13:56.864 "superblock": true, 00:13:56.864 "num_base_bdevs": 3, 00:13:56.864 "num_base_bdevs_discovered": 3, 00:13:56.864 "num_base_bdevs_operational": 3, 00:13:56.864 "base_bdevs_list": [ 00:13:56.864 { 00:13:56.864 "name": "BaseBdev1", 00:13:56.864 "uuid": "0c64b741-43fb-5f12-b7af-acd9615f9a1d", 00:13:56.864 "is_configured": true, 00:13:56.864 "data_offset": 2048, 00:13:56.864 "data_size": 63488 00:13:56.864 }, 00:13:56.864 { 00:13:56.864 "name": "BaseBdev2", 00:13:56.864 "uuid": "6a5ea71c-80cf-5689-8181-55a556423da8", 00:13:56.864 "is_configured": true, 00:13:56.864 "data_offset": 2048, 00:13:56.864 "data_size": 63488 00:13:56.864 }, 00:13:56.864 { 00:13:56.864 "name": "BaseBdev3", 00:13:56.864 "uuid": "f3bfae7d-f8a3-5975-981d-0df3097459e2", 00:13:56.864 "is_configured": true, 00:13:56.864 "data_offset": 2048, 00:13:56.864 "data_size": 
63488 00:13:56.864 } 00:13:56.864 ] 00:13:56.864 }' 00:13:56.864 18:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.864 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:57.479 [2024-07-15 18:28:42.920114] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.479 [2024-07-15 18:28:42.920155] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.479 [2024-07-15 18:28:42.923571] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.479 [2024-07-15 18:28:42.923606] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.479 [2024-07-15 18:28:42.923639] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.479 [2024-07-15 18:28:42.923654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaf5a00 name raid_bdev1, state offline 00:13:57.479 0 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2796688 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2796688 ']' 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2796688 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2796688 00:13:57.479 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:57.480 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:57.480 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2796688' 00:13:57.480 killing process with pid 2796688 00:13:57.480 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2796688 00:13:57.480 [2024-07-15 18:28:42.994990] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.480 18:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2796688 00:13:57.480 [2024-07-15 18:28:43.014708] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Gypya7MXVG 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:13:57.739 00:13:57.739 real 0m7.093s 00:13:57.739 user 0m11.542s 00:13:57.739 sys 0m1.002s 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.739 18:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.739 ************************************ 00:13:57.739 END TEST raid_write_error_test 00:13:57.739 
************************************ 00:13:57.739 18:28:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:57.739 18:28:43 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:57.739 18:28:43 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:13:57.739 18:28:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:57.739 18:28:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.739 18:28:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.739 ************************************ 00:13:57.739 START TEST raid_state_function_test 00:13:57.739 ************************************ 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2797857 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2797857' 00:13:57.739 Process raid pid: 2797857 00:13:57.739 18:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2797857 /var/tmp/spdk-raid.sock 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2797857 ']' 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:57.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.739 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.998 [2024-07-15 18:28:43.331773] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:13:57.998 [2024-07-15 18:28:43.331831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.998 [2024-07-15 18:28:43.446293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.257 [2024-07-15 18:28:43.577037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.257 [2024-07-15 18:28:43.640487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.257 [2024-07-15 18:28:43.640515] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.257 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.257 18:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:58.257 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:58.516 [2024-07-15 18:28:43.929549] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.516 [2024-07-15 18:28:43.929588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.516 [2024-07-15 18:28:43.929597] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.516 [2024-07-15 18:28:43.929606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.517 [2024-07-15 18:28:43.929616] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:58.517 [2024-07-15 18:28:43.929624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.517 
18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.517 18:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.775 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.775 "name": "Existed_Raid", 00:13:58.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.775 "strip_size_kb": 64, 00:13:58.775 "state": "configuring", 00:13:58.775 "raid_level": "concat", 00:13:58.775 "superblock": false, 00:13:58.775 "num_base_bdevs": 3, 00:13:58.775 "num_base_bdevs_discovered": 0, 00:13:58.775 "num_base_bdevs_operational": 3, 00:13:58.775 "base_bdevs_list": [ 00:13:58.775 { 
00:13:58.775 "name": "BaseBdev1", 00:13:58.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.775 "is_configured": false, 00:13:58.775 "data_offset": 0, 00:13:58.775 "data_size": 0 00:13:58.775 }, 00:13:58.775 { 00:13:58.775 "name": "BaseBdev2", 00:13:58.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.776 "is_configured": false, 00:13:58.776 "data_offset": 0, 00:13:58.776 "data_size": 0 00:13:58.776 }, 00:13:58.776 { 00:13:58.776 "name": "BaseBdev3", 00:13:58.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.776 "is_configured": false, 00:13:58.776 "data_offset": 0, 00:13:58.776 "data_size": 0 00:13:58.776 } 00:13:58.776 ] 00:13:58.776 }' 00:13:58.776 18:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.776 18:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.711 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.711 [2024-07-15 18:28:45.240909] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.711 [2024-07-15 18:28:45.240941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcb2ba0 name Existed_Raid, state configuring 00:13:59.969 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:59.969 [2024-07-15 18:28:45.501719] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.970 [2024-07-15 18:28:45.501757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.970 [2024-07-15 18:28:45.501765] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev2 00:13:59.970 [2024-07-15 18:28:45.501774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.970 [2024-07-15 18:28:45.501780] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:59.970 [2024-07-15 18:28:45.501788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.228 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.228 [2024-07-15 18:28:45.771748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.228 BaseBdev1 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.486 18:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.744 18:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.744 [ 00:14:00.744 { 00:14:00.744 "name": "BaseBdev1", 00:14:00.744 "aliases": [ 00:14:00.744 
"244f90c9-cf5f-42d4-a191-89cbf40ad82d" 00:14:00.744 ], 00:14:00.744 "product_name": "Malloc disk", 00:14:00.744 "block_size": 512, 00:14:00.744 "num_blocks": 65536, 00:14:00.744 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:00.744 "assigned_rate_limits": { 00:14:00.744 "rw_ios_per_sec": 0, 00:14:00.744 "rw_mbytes_per_sec": 0, 00:14:00.744 "r_mbytes_per_sec": 0, 00:14:00.744 "w_mbytes_per_sec": 0 00:14:00.744 }, 00:14:00.744 "claimed": true, 00:14:00.744 "claim_type": "exclusive_write", 00:14:00.744 "zoned": false, 00:14:00.744 "supported_io_types": { 00:14:00.744 "read": true, 00:14:00.744 "write": true, 00:14:00.744 "unmap": true, 00:14:00.744 "flush": true, 00:14:00.744 "reset": true, 00:14:00.744 "nvme_admin": false, 00:14:00.744 "nvme_io": false, 00:14:00.744 "nvme_io_md": false, 00:14:00.744 "write_zeroes": true, 00:14:00.744 "zcopy": true, 00:14:00.744 "get_zone_info": false, 00:14:00.744 "zone_management": false, 00:14:00.744 "zone_append": false, 00:14:00.744 "compare": false, 00:14:00.744 "compare_and_write": false, 00:14:00.744 "abort": true, 00:14:00.744 "seek_hole": false, 00:14:00.744 "seek_data": false, 00:14:00.744 "copy": true, 00:14:00.744 "nvme_iov_md": false 00:14:00.744 }, 00:14:00.744 "memory_domains": [ 00:14:00.744 { 00:14:00.744 "dma_device_id": "system", 00:14:00.744 "dma_device_type": 1 00:14:00.744 }, 00:14:00.744 { 00:14:00.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.744 "dma_device_type": 2 00:14:00.744 } 00:14:00.744 ], 00:14:00.744 "driver_specific": {} 00:14:00.744 } 00:14:00.744 ] 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.002 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.260 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.260 "name": "Existed_Raid", 00:14:01.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.260 "strip_size_kb": 64, 00:14:01.260 "state": "configuring", 00:14:01.260 "raid_level": "concat", 00:14:01.260 "superblock": false, 00:14:01.260 "num_base_bdevs": 3, 00:14:01.260 "num_base_bdevs_discovered": 1, 00:14:01.260 "num_base_bdevs_operational": 3, 00:14:01.260 "base_bdevs_list": [ 00:14:01.260 { 00:14:01.260 "name": "BaseBdev1", 00:14:01.260 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:01.260 "is_configured": true, 00:14:01.260 "data_offset": 0, 00:14:01.260 "data_size": 65536 00:14:01.260 }, 00:14:01.260 { 00:14:01.260 "name": "BaseBdev2", 00:14:01.260 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:01.260 "is_configured": false, 00:14:01.260 "data_offset": 0, 00:14:01.260 "data_size": 0 00:14:01.260 }, 00:14:01.260 { 00:14:01.260 "name": "BaseBdev3", 00:14:01.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.260 "is_configured": false, 00:14:01.260 "data_offset": 0, 00:14:01.260 "data_size": 0 00:14:01.260 } 00:14:01.260 ] 00:14:01.260 }' 00:14:01.260 18:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.260 18:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.193 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:02.193 [2024-07-15 18:28:47.676873] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.193 [2024-07-15 18:28:47.676911] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcb2470 name Existed_Raid, state configuring 00:14:02.193 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:02.451 [2024-07-15 18:28:47.933593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.451 [2024-07-15 18:28:47.935168] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.451 [2024-07-15 18:28:47.935201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.451 [2024-07-15 18:28:47.935209] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.451 [2024-07-15 18:28:47.935217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.451 18:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.709 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.709 "name": "Existed_Raid", 00:14:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.709 "strip_size_kb": 64, 00:14:02.709 "state": "configuring", 00:14:02.709 
"raid_level": "concat", 00:14:02.709 "superblock": false, 00:14:02.709 "num_base_bdevs": 3, 00:14:02.709 "num_base_bdevs_discovered": 1, 00:14:02.709 "num_base_bdevs_operational": 3, 00:14:02.709 "base_bdevs_list": [ 00:14:02.709 { 00:14:02.709 "name": "BaseBdev1", 00:14:02.709 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:02.709 "is_configured": true, 00:14:02.709 "data_offset": 0, 00:14:02.709 "data_size": 65536 00:14:02.709 }, 00:14:02.709 { 00:14:02.709 "name": "BaseBdev2", 00:14:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.709 "is_configured": false, 00:14:02.709 "data_offset": 0, 00:14:02.709 "data_size": 0 00:14:02.709 }, 00:14:02.709 { 00:14:02.709 "name": "BaseBdev3", 00:14:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.709 "is_configured": false, 00:14:02.709 "data_offset": 0, 00:14:02.709 "data_size": 0 00:14:02.709 } 00:14:02.709 ] 00:14:02.709 }' 00:14:02.709 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.709 18:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.642 18:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:03.642 [2024-07-15 18:28:49.099976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.642 BaseBdev2 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:03.642 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.900 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.158 [ 00:14:04.158 { 00:14:04.158 "name": "BaseBdev2", 00:14:04.158 "aliases": [ 00:14:04.158 "925ebc65-fbbb-4d62-aee2-eb7f25d8386d" 00:14:04.158 ], 00:14:04.158 "product_name": "Malloc disk", 00:14:04.158 "block_size": 512, 00:14:04.158 "num_blocks": 65536, 00:14:04.158 "uuid": "925ebc65-fbbb-4d62-aee2-eb7f25d8386d", 00:14:04.158 "assigned_rate_limits": { 00:14:04.158 "rw_ios_per_sec": 0, 00:14:04.158 "rw_mbytes_per_sec": 0, 00:14:04.158 "r_mbytes_per_sec": 0, 00:14:04.158 "w_mbytes_per_sec": 0 00:14:04.158 }, 00:14:04.158 "claimed": true, 00:14:04.158 "claim_type": "exclusive_write", 00:14:04.158 "zoned": false, 00:14:04.158 "supported_io_types": { 00:14:04.158 "read": true, 00:14:04.158 "write": true, 00:14:04.158 "unmap": true, 00:14:04.158 "flush": true, 00:14:04.158 "reset": true, 00:14:04.158 "nvme_admin": false, 00:14:04.158 "nvme_io": false, 00:14:04.158 "nvme_io_md": false, 00:14:04.158 "write_zeroes": true, 00:14:04.158 "zcopy": true, 00:14:04.158 "get_zone_info": false, 00:14:04.158 "zone_management": false, 00:14:04.158 "zone_append": false, 00:14:04.158 "compare": false, 00:14:04.158 "compare_and_write": false, 00:14:04.158 "abort": true, 00:14:04.158 "seek_hole": false, 00:14:04.158 "seek_data": false, 00:14:04.158 "copy": true, 00:14:04.158 "nvme_iov_md": false 00:14:04.158 }, 00:14:04.158 "memory_domains": [ 00:14:04.158 { 00:14:04.158 "dma_device_id": "system", 
00:14:04.158 "dma_device_type": 1 00:14:04.158 }, 00:14:04.158 { 00:14:04.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.158 "dma_device_type": 2 00:14:04.158 } 00:14:04.158 ], 00:14:04.158 "driver_specific": {} 00:14:04.158 } 00:14:04.158 ] 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.158 18:28:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.416 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.416 "name": "Existed_Raid", 00:14:04.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.416 "strip_size_kb": 64, 00:14:04.416 "state": "configuring", 00:14:04.416 "raid_level": "concat", 00:14:04.416 "superblock": false, 00:14:04.416 "num_base_bdevs": 3, 00:14:04.416 "num_base_bdevs_discovered": 2, 00:14:04.416 "num_base_bdevs_operational": 3, 00:14:04.416 "base_bdevs_list": [ 00:14:04.416 { 00:14:04.416 "name": "BaseBdev1", 00:14:04.416 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:04.416 "is_configured": true, 00:14:04.416 "data_offset": 0, 00:14:04.416 "data_size": 65536 00:14:04.416 }, 00:14:04.416 { 00:14:04.416 "name": "BaseBdev2", 00:14:04.416 "uuid": "925ebc65-fbbb-4d62-aee2-eb7f25d8386d", 00:14:04.416 "is_configured": true, 00:14:04.416 "data_offset": 0, 00:14:04.416 "data_size": 65536 00:14:04.416 }, 00:14:04.416 { 00:14:04.416 "name": "BaseBdev3", 00:14:04.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.416 "is_configured": false, 00:14:04.416 "data_offset": 0, 00:14:04.416 "data_size": 0 00:14:04.416 } 00:14:04.416 ] 00:14:04.416 }' 00:14:04.416 18:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.416 18:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.000 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.259 [2024-07-15 18:28:50.775793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.259 [2024-07-15 18:28:50.775826] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcb3360 00:14:05.259 [2024-07-15 18:28:50.775832] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:05.259 [2024-07-15 18:28:50.776045] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe57cd0 00:14:05.259 [2024-07-15 18:28:50.776191] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcb3360 00:14:05.259 [2024-07-15 18:28:50.776200] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcb3360 00:14:05.259 [2024-07-15 18:28:50.776366] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.259 BaseBdev3 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:05.259 18:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.517 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.775 [ 00:14:05.775 { 00:14:05.775 "name": "BaseBdev3", 00:14:05.775 "aliases": [ 00:14:05.775 "1fd3c01f-ebd6-4f1f-bc51-088fcaef5441" 00:14:05.775 ], 00:14:05.775 "product_name": "Malloc disk", 00:14:05.775 "block_size": 512, 00:14:05.775 "num_blocks": 65536, 00:14:05.775 "uuid": 
"1fd3c01f-ebd6-4f1f-bc51-088fcaef5441", 00:14:05.775 "assigned_rate_limits": { 00:14:05.775 "rw_ios_per_sec": 0, 00:14:05.775 "rw_mbytes_per_sec": 0, 00:14:05.775 "r_mbytes_per_sec": 0, 00:14:05.775 "w_mbytes_per_sec": 0 00:14:05.775 }, 00:14:05.775 "claimed": true, 00:14:05.775 "claim_type": "exclusive_write", 00:14:05.775 "zoned": false, 00:14:05.775 "supported_io_types": { 00:14:05.775 "read": true, 00:14:05.775 "write": true, 00:14:05.775 "unmap": true, 00:14:05.775 "flush": true, 00:14:05.775 "reset": true, 00:14:05.775 "nvme_admin": false, 00:14:05.775 "nvme_io": false, 00:14:05.775 "nvme_io_md": false, 00:14:05.775 "write_zeroes": true, 00:14:05.775 "zcopy": true, 00:14:05.775 "get_zone_info": false, 00:14:05.775 "zone_management": false, 00:14:05.775 "zone_append": false, 00:14:05.775 "compare": false, 00:14:05.775 "compare_and_write": false, 00:14:05.775 "abort": true, 00:14:05.775 "seek_hole": false, 00:14:05.775 "seek_data": false, 00:14:05.775 "copy": true, 00:14:05.775 "nvme_iov_md": false 00:14:05.775 }, 00:14:05.775 "memory_domains": [ 00:14:05.775 { 00:14:05.775 "dma_device_id": "system", 00:14:05.775 "dma_device_type": 1 00:14:05.775 }, 00:14:05.775 { 00:14:05.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.775 "dma_device_type": 2 00:14:05.775 } 00:14:05.775 ], 00:14:05.775 "driver_specific": {} 00:14:05.775 } 00:14:05.775 ] 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.775 18:28:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.775 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.033 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.033 "name": "Existed_Raid", 00:14:06.033 "uuid": "90673224-c28a-4eaf-b690-c4b0047432ad", 00:14:06.033 "strip_size_kb": 64, 00:14:06.034 "state": "online", 00:14:06.034 "raid_level": "concat", 00:14:06.034 "superblock": false, 00:14:06.034 "num_base_bdevs": 3, 00:14:06.034 "num_base_bdevs_discovered": 3, 00:14:06.034 "num_base_bdevs_operational": 3, 00:14:06.034 "base_bdevs_list": [ 00:14:06.034 { 00:14:06.034 "name": "BaseBdev1", 00:14:06.034 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:06.034 "is_configured": true, 00:14:06.034 "data_offset": 0, 00:14:06.034 "data_size": 65536 00:14:06.034 }, 00:14:06.034 { 00:14:06.034 "name": "BaseBdev2", 00:14:06.034 "uuid": 
"925ebc65-fbbb-4d62-aee2-eb7f25d8386d", 00:14:06.034 "is_configured": true, 00:14:06.034 "data_offset": 0, 00:14:06.034 "data_size": 65536 00:14:06.034 }, 00:14:06.034 { 00:14:06.034 "name": "BaseBdev3", 00:14:06.034 "uuid": "1fd3c01f-ebd6-4f1f-bc51-088fcaef5441", 00:14:06.034 "is_configured": true, 00:14:06.034 "data_offset": 0, 00:14:06.034 "data_size": 65536 00:14:06.034 } 00:14:06.034 ] 00:14:06.034 }' 00:14:06.034 18:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.034 18:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:06.969 [2024-07-15 18:28:52.440589] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:06.969 "name": "Existed_Raid", 00:14:06.969 "aliases": [ 00:14:06.969 "90673224-c28a-4eaf-b690-c4b0047432ad" 00:14:06.969 ], 00:14:06.969 "product_name": "Raid Volume", 
00:14:06.969 "block_size": 512, 00:14:06.969 "num_blocks": 196608, 00:14:06.969 "uuid": "90673224-c28a-4eaf-b690-c4b0047432ad", 00:14:06.969 "assigned_rate_limits": { 00:14:06.969 "rw_ios_per_sec": 0, 00:14:06.969 "rw_mbytes_per_sec": 0, 00:14:06.969 "r_mbytes_per_sec": 0, 00:14:06.969 "w_mbytes_per_sec": 0 00:14:06.969 }, 00:14:06.969 "claimed": false, 00:14:06.969 "zoned": false, 00:14:06.969 "supported_io_types": { 00:14:06.969 "read": true, 00:14:06.969 "write": true, 00:14:06.969 "unmap": true, 00:14:06.969 "flush": true, 00:14:06.969 "reset": true, 00:14:06.969 "nvme_admin": false, 00:14:06.969 "nvme_io": false, 00:14:06.969 "nvme_io_md": false, 00:14:06.969 "write_zeroes": true, 00:14:06.969 "zcopy": false, 00:14:06.969 "get_zone_info": false, 00:14:06.969 "zone_management": false, 00:14:06.969 "zone_append": false, 00:14:06.969 "compare": false, 00:14:06.969 "compare_and_write": false, 00:14:06.969 "abort": false, 00:14:06.969 "seek_hole": false, 00:14:06.969 "seek_data": false, 00:14:06.969 "copy": false, 00:14:06.969 "nvme_iov_md": false 00:14:06.969 }, 00:14:06.969 "memory_domains": [ 00:14:06.969 { 00:14:06.969 "dma_device_id": "system", 00:14:06.969 "dma_device_type": 1 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.969 "dma_device_type": 2 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "dma_device_id": "system", 00:14:06.969 "dma_device_type": 1 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.969 "dma_device_type": 2 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "dma_device_id": "system", 00:14:06.969 "dma_device_type": 1 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.969 "dma_device_type": 2 00:14:06.969 } 00:14:06.969 ], 00:14:06.969 "driver_specific": { 00:14:06.969 "raid": { 00:14:06.969 "uuid": "90673224-c28a-4eaf-b690-c4b0047432ad", 00:14:06.969 "strip_size_kb": 64, 00:14:06.969 "state": "online", 00:14:06.969 
"raid_level": "concat", 00:14:06.969 "superblock": false, 00:14:06.969 "num_base_bdevs": 3, 00:14:06.969 "num_base_bdevs_discovered": 3, 00:14:06.969 "num_base_bdevs_operational": 3, 00:14:06.969 "base_bdevs_list": [ 00:14:06.969 { 00:14:06.969 "name": "BaseBdev1", 00:14:06.969 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:06.969 "is_configured": true, 00:14:06.969 "data_offset": 0, 00:14:06.969 "data_size": 65536 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "name": "BaseBdev2", 00:14:06.969 "uuid": "925ebc65-fbbb-4d62-aee2-eb7f25d8386d", 00:14:06.969 "is_configured": true, 00:14:06.969 "data_offset": 0, 00:14:06.969 "data_size": 65536 00:14:06.969 }, 00:14:06.969 { 00:14:06.969 "name": "BaseBdev3", 00:14:06.969 "uuid": "1fd3c01f-ebd6-4f1f-bc51-088fcaef5441", 00:14:06.969 "is_configured": true, 00:14:06.969 "data_offset": 0, 00:14:06.969 "data_size": 65536 00:14:06.969 } 00:14:06.969 ] 00:14:06.969 } 00:14:06.969 } 00:14:06.969 }' 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:06.969 BaseBdev2 00:14:06.969 BaseBdev3' 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:06.969 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.536 "name": "BaseBdev1", 00:14:07.536 "aliases": [ 00:14:07.536 "244f90c9-cf5f-42d4-a191-89cbf40ad82d" 00:14:07.536 ], 00:14:07.536 "product_name": "Malloc disk", 00:14:07.536 
"block_size": 512, 00:14:07.536 "num_blocks": 65536, 00:14:07.536 "uuid": "244f90c9-cf5f-42d4-a191-89cbf40ad82d", 00:14:07.536 "assigned_rate_limits": { 00:14:07.536 "rw_ios_per_sec": 0, 00:14:07.536 "rw_mbytes_per_sec": 0, 00:14:07.536 "r_mbytes_per_sec": 0, 00:14:07.536 "w_mbytes_per_sec": 0 00:14:07.536 }, 00:14:07.536 "claimed": true, 00:14:07.536 "claim_type": "exclusive_write", 00:14:07.536 "zoned": false, 00:14:07.536 "supported_io_types": { 00:14:07.536 "read": true, 00:14:07.536 "write": true, 00:14:07.536 "unmap": true, 00:14:07.536 "flush": true, 00:14:07.536 "reset": true, 00:14:07.536 "nvme_admin": false, 00:14:07.536 "nvme_io": false, 00:14:07.536 "nvme_io_md": false, 00:14:07.536 "write_zeroes": true, 00:14:07.536 "zcopy": true, 00:14:07.536 "get_zone_info": false, 00:14:07.536 "zone_management": false, 00:14:07.536 "zone_append": false, 00:14:07.536 "compare": false, 00:14:07.536 "compare_and_write": false, 00:14:07.536 "abort": true, 00:14:07.536 "seek_hole": false, 00:14:07.536 "seek_data": false, 00:14:07.536 "copy": true, 00:14:07.536 "nvme_iov_md": false 00:14:07.536 }, 00:14:07.536 "memory_domains": [ 00:14:07.536 { 00:14:07.536 "dma_device_id": "system", 00:14:07.536 "dma_device_type": 1 00:14:07.536 }, 00:14:07.536 { 00:14:07.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.536 "dma_device_type": 2 00:14:07.536 } 00:14:07.536 ], 00:14:07.536 "driver_specific": {} 00:14:07.536 }' 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.536 18:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.536 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.536 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.536 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.794 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.794 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.794 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:07.794 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:07.794 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.053 "name": "BaseBdev2", 00:14:08.053 "aliases": [ 00:14:08.053 "925ebc65-fbbb-4d62-aee2-eb7f25d8386d" 00:14:08.053 ], 00:14:08.053 "product_name": "Malloc disk", 00:14:08.053 "block_size": 512, 00:14:08.053 "num_blocks": 65536, 00:14:08.053 "uuid": "925ebc65-fbbb-4d62-aee2-eb7f25d8386d", 00:14:08.053 "assigned_rate_limits": { 00:14:08.053 "rw_ios_per_sec": 0, 00:14:08.053 "rw_mbytes_per_sec": 0, 00:14:08.053 "r_mbytes_per_sec": 0, 00:14:08.053 "w_mbytes_per_sec": 0 00:14:08.053 }, 00:14:08.053 "claimed": true, 00:14:08.053 "claim_type": "exclusive_write", 00:14:08.053 "zoned": false, 00:14:08.053 "supported_io_types": { 00:14:08.053 "read": true, 00:14:08.053 "write": true, 00:14:08.053 "unmap": true, 00:14:08.053 "flush": true, 00:14:08.053 "reset": true, 00:14:08.053 "nvme_admin": 
false, 00:14:08.053 "nvme_io": false, 00:14:08.053 "nvme_io_md": false, 00:14:08.053 "write_zeroes": true, 00:14:08.053 "zcopy": true, 00:14:08.053 "get_zone_info": false, 00:14:08.053 "zone_management": false, 00:14:08.053 "zone_append": false, 00:14:08.053 "compare": false, 00:14:08.053 "compare_and_write": false, 00:14:08.053 "abort": true, 00:14:08.053 "seek_hole": false, 00:14:08.053 "seek_data": false, 00:14:08.053 "copy": true, 00:14:08.053 "nvme_iov_md": false 00:14:08.053 }, 00:14:08.053 "memory_domains": [ 00:14:08.053 { 00:14:08.053 "dma_device_id": "system", 00:14:08.053 "dma_device_type": 1 00:14:08.053 }, 00:14:08.053 { 00:14:08.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.053 "dma_device_type": 2 00:14:08.053 } 00:14:08.053 ], 00:14:08.053 "driver_specific": {} 00:14:08.053 }' 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.053 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:08.311 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.570 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.570 "name": "BaseBdev3", 00:14:08.570 "aliases": [ 00:14:08.570 "1fd3c01f-ebd6-4f1f-bc51-088fcaef5441" 00:14:08.570 ], 00:14:08.570 "product_name": "Malloc disk", 00:14:08.570 "block_size": 512, 00:14:08.570 "num_blocks": 65536, 00:14:08.570 "uuid": "1fd3c01f-ebd6-4f1f-bc51-088fcaef5441", 00:14:08.570 "assigned_rate_limits": { 00:14:08.570 "rw_ios_per_sec": 0, 00:14:08.570 "rw_mbytes_per_sec": 0, 00:14:08.570 "r_mbytes_per_sec": 0, 00:14:08.570 "w_mbytes_per_sec": 0 00:14:08.570 }, 00:14:08.570 "claimed": true, 00:14:08.570 "claim_type": "exclusive_write", 00:14:08.570 "zoned": false, 00:14:08.570 "supported_io_types": { 00:14:08.570 "read": true, 00:14:08.570 "write": true, 00:14:08.570 "unmap": true, 00:14:08.570 "flush": true, 00:14:08.570 "reset": true, 00:14:08.570 "nvme_admin": false, 00:14:08.570 "nvme_io": false, 00:14:08.570 "nvme_io_md": false, 00:14:08.570 "write_zeroes": true, 00:14:08.570 "zcopy": true, 00:14:08.570 "get_zone_info": false, 00:14:08.570 "zone_management": false, 00:14:08.570 "zone_append": false, 00:14:08.570 "compare": false, 00:14:08.570 "compare_and_write": false, 00:14:08.570 "abort": true, 00:14:08.570 "seek_hole": false, 00:14:08.570 "seek_data": false, 00:14:08.570 "copy": true, 00:14:08.570 "nvme_iov_md": false 00:14:08.570 }, 00:14:08.570 "memory_domains": [ 00:14:08.570 { 00:14:08.570 "dma_device_id": "system", 00:14:08.570 "dma_device_type": 1 00:14:08.570 
}, 00:14:08.570 { 00:14:08.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.570 "dma_device_type": 2 00:14:08.570 } 00:14:08.570 ], 00:14:08.570 "driver_specific": {} 00:14:08.570 }' 00:14:08.570 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.570 18:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.570 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:08.570 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.570 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.570 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.570 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.829 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.829 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.829 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.829 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.829 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.829 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:09.088 [2024-07-15 18:28:54.429694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.088 [2024-07-15 18:28:54.429718] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.088 [2024-07-15 18:28:54.429756] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.088 
18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.088 "name": "Existed_Raid", 00:14:09.088 "uuid": "90673224-c28a-4eaf-b690-c4b0047432ad", 00:14:09.088 "strip_size_kb": 64, 00:14:09.088 "state": "offline", 00:14:09.088 "raid_level": "concat", 00:14:09.088 "superblock": false, 00:14:09.088 "num_base_bdevs": 3, 00:14:09.088 "num_base_bdevs_discovered": 2, 00:14:09.088 "num_base_bdevs_operational": 2, 00:14:09.088 "base_bdevs_list": [ 00:14:09.088 { 00:14:09.088 "name": null, 00:14:09.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.088 "is_configured": false, 00:14:09.088 "data_offset": 0, 00:14:09.088 "data_size": 65536 00:14:09.088 }, 00:14:09.088 { 00:14:09.088 "name": "BaseBdev2", 00:14:09.088 "uuid": "925ebc65-fbbb-4d62-aee2-eb7f25d8386d", 00:14:09.088 "is_configured": true, 00:14:09.088 "data_offset": 0, 00:14:09.088 "data_size": 65536 00:14:09.088 }, 00:14:09.088 { 00:14:09.088 "name": "BaseBdev3", 00:14:09.088 "uuid": "1fd3c01f-ebd6-4f1f-bc51-088fcaef5441", 00:14:09.088 "is_configured": true, 00:14:09.088 "data_offset": 0, 00:14:09.088 "data_size": 65536 00:14:09.088 } 00:14:09.088 ] 00:14:09.088 }' 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.088 18:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.656 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:09.656 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:09.656 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.656 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:09.914 18:28:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:09.914 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.914 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:10.173 [2024-07-15 18:28:55.670087] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.173 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:10.173 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.173 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.173 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:10.431 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:10.431 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.431 18:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:10.690 [2024-07-15 18:28:56.194118] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:10.690 [2024-07-15 18:28:56.194158] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcb3360 name Existed_Raid, state offline 00:14:10.690 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:10.690 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.690 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.690 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:10.949 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:10.949 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:10.949 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:10.949 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:10.949 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:10.949 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.208 BaseBdev2 00:14:11.208 18:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:11.208 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:11.208 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:11.208 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:11.208 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:11.208 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:11.209 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.471 18:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.779 [ 00:14:11.779 { 00:14:11.779 "name": "BaseBdev2", 00:14:11.779 "aliases": [ 00:14:11.779 "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc" 00:14:11.779 ], 00:14:11.779 "product_name": "Malloc disk", 00:14:11.779 "block_size": 512, 00:14:11.779 "num_blocks": 65536, 00:14:11.779 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:11.779 "assigned_rate_limits": { 00:14:11.779 "rw_ios_per_sec": 0, 00:14:11.779 "rw_mbytes_per_sec": 0, 00:14:11.779 "r_mbytes_per_sec": 0, 00:14:11.779 "w_mbytes_per_sec": 0 00:14:11.779 }, 00:14:11.779 "claimed": false, 00:14:11.779 "zoned": false, 00:14:11.779 "supported_io_types": { 00:14:11.779 "read": true, 00:14:11.779 "write": true, 00:14:11.779 "unmap": true, 00:14:11.779 "flush": true, 00:14:11.779 "reset": true, 00:14:11.779 "nvme_admin": false, 00:14:11.779 "nvme_io": false, 00:14:11.779 "nvme_io_md": false, 00:14:11.779 "write_zeroes": true, 00:14:11.779 "zcopy": true, 00:14:11.779 "get_zone_info": false, 00:14:11.779 "zone_management": false, 00:14:11.779 "zone_append": false, 00:14:11.779 "compare": false, 00:14:11.779 "compare_and_write": false, 00:14:11.779 "abort": true, 00:14:11.779 "seek_hole": false, 00:14:11.779 "seek_data": false, 00:14:11.779 "copy": true, 00:14:11.779 "nvme_iov_md": false 00:14:11.779 }, 00:14:11.779 "memory_domains": [ 00:14:11.779 { 00:14:11.779 "dma_device_id": "system", 00:14:11.779 "dma_device_type": 1 00:14:11.779 }, 00:14:11.779 { 00:14:11.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.779 "dma_device_type": 2 00:14:11.779 } 00:14:11.779 ], 00:14:11.779 "driver_specific": {} 00:14:11.779 } 00:14:11.779 ] 00:14:11.779 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:11.779 18:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:11.779 18:28:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:11.779 18:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.044 BaseBdev3 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:12.044 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.303 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.562 [ 00:14:12.562 { 00:14:12.562 "name": "BaseBdev3", 00:14:12.562 "aliases": [ 00:14:12.562 "1859fa51-f249-4796-a550-2492f8537126" 00:14:12.562 ], 00:14:12.562 "product_name": "Malloc disk", 00:14:12.562 "block_size": 512, 00:14:12.562 "num_blocks": 65536, 00:14:12.562 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:12.562 "assigned_rate_limits": { 00:14:12.562 "rw_ios_per_sec": 0, 00:14:12.562 "rw_mbytes_per_sec": 0, 00:14:12.562 "r_mbytes_per_sec": 0, 00:14:12.562 "w_mbytes_per_sec": 0 00:14:12.562 }, 00:14:12.562 "claimed": false, 00:14:12.562 "zoned": false, 00:14:12.562 
"supported_io_types": { 00:14:12.562 "read": true, 00:14:12.562 "write": true, 00:14:12.562 "unmap": true, 00:14:12.562 "flush": true, 00:14:12.562 "reset": true, 00:14:12.562 "nvme_admin": false, 00:14:12.562 "nvme_io": false, 00:14:12.562 "nvme_io_md": false, 00:14:12.562 "write_zeroes": true, 00:14:12.562 "zcopy": true, 00:14:12.562 "get_zone_info": false, 00:14:12.562 "zone_management": false, 00:14:12.562 "zone_append": false, 00:14:12.562 "compare": false, 00:14:12.562 "compare_and_write": false, 00:14:12.562 "abort": true, 00:14:12.562 "seek_hole": false, 00:14:12.562 "seek_data": false, 00:14:12.562 "copy": true, 00:14:12.562 "nvme_iov_md": false 00:14:12.562 }, 00:14:12.562 "memory_domains": [ 00:14:12.562 { 00:14:12.562 "dma_device_id": "system", 00:14:12.562 "dma_device_type": 1 00:14:12.562 }, 00:14:12.562 { 00:14:12.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.562 "dma_device_type": 2 00:14:12.562 } 00:14:12.562 ], 00:14:12.562 "driver_specific": {} 00:14:12.562 } 00:14:12.562 ] 00:14:12.562 18:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:12.562 18:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:12.562 18:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.562 18:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:12.821 [2024-07-15 18:28:58.222923] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.821 [2024-07-15 18:28:58.222966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.821 [2024-07-15 18:28:58.222986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.821 
[2024-07-15 18:28:58.224408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.821 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.080 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.080 "name": "Existed_Raid", 00:14:13.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.080 "strip_size_kb": 64, 00:14:13.080 "state": "configuring", 00:14:13.080 "raid_level": "concat", 00:14:13.080 "superblock": false, 00:14:13.080 "num_base_bdevs": 3, 00:14:13.080 
"num_base_bdevs_discovered": 2, 00:14:13.080 "num_base_bdevs_operational": 3, 00:14:13.080 "base_bdevs_list": [ 00:14:13.080 { 00:14:13.080 "name": "BaseBdev1", 00:14:13.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.080 "is_configured": false, 00:14:13.080 "data_offset": 0, 00:14:13.080 "data_size": 0 00:14:13.080 }, 00:14:13.080 { 00:14:13.080 "name": "BaseBdev2", 00:14:13.080 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:13.080 "is_configured": true, 00:14:13.080 "data_offset": 0, 00:14:13.080 "data_size": 65536 00:14:13.080 }, 00:14:13.080 { 00:14:13.080 "name": "BaseBdev3", 00:14:13.080 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:13.080 "is_configured": true, 00:14:13.080 "data_offset": 0, 00:14:13.080 "data_size": 65536 00:14:13.080 } 00:14:13.080 ] 00:14:13.080 }' 00:14:13.080 18:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.080 18:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.648 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:13.907 [2024-07-15 18:28:59.349922] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.907 18:28:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.907 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.166 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.167 "name": "Existed_Raid", 00:14:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.167 "strip_size_kb": 64, 00:14:14.167 "state": "configuring", 00:14:14.167 "raid_level": "concat", 00:14:14.167 "superblock": false, 00:14:14.167 "num_base_bdevs": 3, 00:14:14.167 "num_base_bdevs_discovered": 1, 00:14:14.167 "num_base_bdevs_operational": 3, 00:14:14.167 "base_bdevs_list": [ 00:14:14.167 { 00:14:14.167 "name": "BaseBdev1", 00:14:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.167 "is_configured": false, 00:14:14.167 "data_offset": 0, 00:14:14.167 "data_size": 0 00:14:14.167 }, 00:14:14.167 { 00:14:14.167 "name": null, 00:14:14.167 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:14.167 "is_configured": false, 00:14:14.167 "data_offset": 0, 00:14:14.167 "data_size": 65536 00:14:14.167 }, 00:14:14.167 { 00:14:14.167 "name": "BaseBdev3", 00:14:14.167 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:14.167 "is_configured": true, 00:14:14.167 "data_offset": 0, 
00:14:14.167 "data_size": 65536 00:14:14.167 } 00:14:14.167 ] 00:14:14.167 }' 00:14:14.167 18:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.167 18:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.735 18:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.735 18:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.993 18:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:14.993 18:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.252 [2024-07-15 18:29:00.737080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.252 BaseBdev1 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:15.252 18:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:15.511 18:29:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.769 [ 00:14:15.769 { 00:14:15.769 "name": "BaseBdev1", 00:14:15.769 "aliases": [ 00:14:15.769 "08e3b04c-eee2-468c-8665-1b9ac21fdc6a" 00:14:15.769 ], 00:14:15.769 "product_name": "Malloc disk", 00:14:15.769 "block_size": 512, 00:14:15.769 "num_blocks": 65536, 00:14:15.769 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:15.769 "assigned_rate_limits": { 00:14:15.769 "rw_ios_per_sec": 0, 00:14:15.769 "rw_mbytes_per_sec": 0, 00:14:15.769 "r_mbytes_per_sec": 0, 00:14:15.769 "w_mbytes_per_sec": 0 00:14:15.769 }, 00:14:15.769 "claimed": true, 00:14:15.769 "claim_type": "exclusive_write", 00:14:15.769 "zoned": false, 00:14:15.769 "supported_io_types": { 00:14:15.769 "read": true, 00:14:15.769 "write": true, 00:14:15.769 "unmap": true, 00:14:15.769 "flush": true, 00:14:15.769 "reset": true, 00:14:15.769 "nvme_admin": false, 00:14:15.769 "nvme_io": false, 00:14:15.769 "nvme_io_md": false, 00:14:15.769 "write_zeroes": true, 00:14:15.769 "zcopy": true, 00:14:15.769 "get_zone_info": false, 00:14:15.769 "zone_management": false, 00:14:15.769 "zone_append": false, 00:14:15.769 "compare": false, 00:14:15.769 "compare_and_write": false, 00:14:15.769 "abort": true, 00:14:15.769 "seek_hole": false, 00:14:15.769 "seek_data": false, 00:14:15.769 "copy": true, 00:14:15.769 "nvme_iov_md": false 00:14:15.769 }, 00:14:15.769 "memory_domains": [ 00:14:15.769 { 00:14:15.769 "dma_device_id": "system", 00:14:15.769 "dma_device_type": 1 00:14:15.769 }, 00:14:15.769 { 00:14:15.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.769 "dma_device_type": 2 00:14:15.769 } 00:14:15.769 ], 00:14:15.769 "driver_specific": {} 00:14:15.769 } 00:14:15.769 ] 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:15.769 18:29:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.769 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.028 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.028 "name": "Existed_Raid", 00:14:16.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.028 "strip_size_kb": 64, 00:14:16.028 "state": "configuring", 00:14:16.028 "raid_level": "concat", 00:14:16.028 "superblock": false, 00:14:16.028 "num_base_bdevs": 3, 00:14:16.028 "num_base_bdevs_discovered": 2, 00:14:16.028 "num_base_bdevs_operational": 3, 00:14:16.028 "base_bdevs_list": [ 00:14:16.028 { 
00:14:16.028 "name": "BaseBdev1", 00:14:16.028 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:16.028 "is_configured": true, 00:14:16.028 "data_offset": 0, 00:14:16.028 "data_size": 65536 00:14:16.028 }, 00:14:16.028 { 00:14:16.028 "name": null, 00:14:16.028 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:16.028 "is_configured": false, 00:14:16.028 "data_offset": 0, 00:14:16.028 "data_size": 65536 00:14:16.028 }, 00:14:16.028 { 00:14:16.028 "name": "BaseBdev3", 00:14:16.028 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:16.028 "is_configured": true, 00:14:16.028 "data_offset": 0, 00:14:16.028 "data_size": 65536 00:14:16.028 } 00:14:16.028 ] 00:14:16.028 }' 00:14:16.028 18:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.028 18:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.593 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.593 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.852 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:16.852 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:17.110 [2024-07-15 18:29:02.529937] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.110 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.367 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.367 "name": "Existed_Raid", 00:14:17.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.367 "strip_size_kb": 64, 00:14:17.367 "state": "configuring", 00:14:17.367 "raid_level": "concat", 00:14:17.367 "superblock": false, 00:14:17.367 "num_base_bdevs": 3, 00:14:17.367 "num_base_bdevs_discovered": 1, 00:14:17.367 "num_base_bdevs_operational": 3, 00:14:17.367 "base_bdevs_list": [ 00:14:17.367 { 00:14:17.367 "name": "BaseBdev1", 00:14:17.367 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:17.367 "is_configured": true, 00:14:17.367 "data_offset": 0, 00:14:17.367 "data_size": 65536 00:14:17.367 }, 00:14:17.367 { 00:14:17.367 "name": null, 00:14:17.367 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:17.367 
"is_configured": false, 00:14:17.367 "data_offset": 0, 00:14:17.367 "data_size": 65536 00:14:17.367 }, 00:14:17.367 { 00:14:17.367 "name": null, 00:14:17.367 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:17.367 "is_configured": false, 00:14:17.367 "data_offset": 0, 00:14:17.367 "data_size": 65536 00:14:17.367 } 00:14:17.367 ] 00:14:17.367 }' 00:14:17.367 18:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.367 18:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.930 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.930 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.188 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:18.188 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.446 [2024-07-15 18:29:03.921708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.446 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:18.446 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.447 18:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.447 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.705 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.705 "name": "Existed_Raid", 00:14:18.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.705 "strip_size_kb": 64, 00:14:18.705 "state": "configuring", 00:14:18.705 "raid_level": "concat", 00:14:18.705 "superblock": false, 00:14:18.705 "num_base_bdevs": 3, 00:14:18.705 "num_base_bdevs_discovered": 2, 00:14:18.705 "num_base_bdevs_operational": 3, 00:14:18.705 "base_bdevs_list": [ 00:14:18.705 { 00:14:18.705 "name": "BaseBdev1", 00:14:18.705 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:18.705 "is_configured": true, 00:14:18.705 "data_offset": 0, 00:14:18.705 "data_size": 65536 00:14:18.705 }, 00:14:18.705 { 00:14:18.705 "name": null, 00:14:18.705 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:18.705 "is_configured": false, 00:14:18.705 "data_offset": 0, 00:14:18.705 "data_size": 65536 00:14:18.705 }, 00:14:18.705 { 00:14:18.705 "name": "BaseBdev3", 00:14:18.705 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:18.705 "is_configured": true, 00:14:18.705 "data_offset": 0, 
00:14:18.705 "data_size": 65536 00:14:18.705 } 00:14:18.705 ] 00:14:18.705 }' 00:14:18.705 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.705 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.642 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.642 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.642 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:19.642 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.901 [2024-07-15 18:29:05.325508] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.901 
18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.901 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.160 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.160 "name": "Existed_Raid", 00:14:20.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.160 "strip_size_kb": 64, 00:14:20.160 "state": "configuring", 00:14:20.160 "raid_level": "concat", 00:14:20.160 "superblock": false, 00:14:20.160 "num_base_bdevs": 3, 00:14:20.160 "num_base_bdevs_discovered": 1, 00:14:20.160 "num_base_bdevs_operational": 3, 00:14:20.160 "base_bdevs_list": [ 00:14:20.160 { 00:14:20.160 "name": null, 00:14:20.160 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:20.160 "is_configured": false, 00:14:20.160 "data_offset": 0, 00:14:20.160 "data_size": 65536 00:14:20.160 }, 00:14:20.160 { 00:14:20.160 "name": null, 00:14:20.160 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:20.160 "is_configured": false, 00:14:20.160 "data_offset": 0, 00:14:20.160 "data_size": 65536 00:14:20.160 }, 00:14:20.160 { 00:14:20.160 "name": "BaseBdev3", 00:14:20.160 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:20.160 "is_configured": true, 00:14:20.160 "data_offset": 0, 00:14:20.160 "data_size": 65536 00:14:20.160 } 00:14:20.160 ] 00:14:20.160 }' 00:14:20.160 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.160 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.097 18:29:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.098 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:21.098 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:21.098 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:21.356 [2024-07-15 18:29:06.735886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.356 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:21.356 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.356 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.356 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.357 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.615 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.616 "name": "Existed_Raid", 00:14:21.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.616 "strip_size_kb": 64, 00:14:21.616 "state": "configuring", 00:14:21.616 "raid_level": "concat", 00:14:21.616 "superblock": false, 00:14:21.616 "num_base_bdevs": 3, 00:14:21.616 "num_base_bdevs_discovered": 2, 00:14:21.616 "num_base_bdevs_operational": 3, 00:14:21.616 "base_bdevs_list": [ 00:14:21.616 { 00:14:21.616 "name": null, 00:14:21.616 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:21.616 "is_configured": false, 00:14:21.616 "data_offset": 0, 00:14:21.616 "data_size": 65536 00:14:21.616 }, 00:14:21.616 { 00:14:21.616 "name": "BaseBdev2", 00:14:21.616 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:21.616 "is_configured": true, 00:14:21.616 "data_offset": 0, 00:14:21.616 "data_size": 65536 00:14:21.616 }, 00:14:21.616 { 00:14:21.616 "name": "BaseBdev3", 00:14:21.616 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:21.616 "is_configured": true, 00:14:21.616 "data_offset": 0, 00:14:21.616 "data_size": 65536 00:14:21.616 } 00:14:21.616 ] 00:14:21.616 }' 00:14:21.616 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.616 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.182 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.182 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.441 
18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:22.441 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.441 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:23.007 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 08e3b04c-eee2-468c-8665-1b9ac21fdc6a 00:14:23.265 [2024-07-15 18:29:08.740651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:23.265 [2024-07-15 18:29:08.740683] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe5bcf0 00:14:23.265 [2024-07-15 18:29:08.740689] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:23.265 [2024-07-15 18:29:08.740892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe57ee0 00:14:23.265 [2024-07-15 18:29:08.741020] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe5bcf0 00:14:23.265 [2024-07-15 18:29:08.741028] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe5bcf0 00:14:23.265 [2024-07-15 18:29:08.741186] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.265 NewBaseBdev 00:14:23.265 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:23.266 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:23.266 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.266 18:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:14:23.266 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.266 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.266 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:23.833 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:24.399 [ 00:14:24.399 { 00:14:24.399 "name": "NewBaseBdev", 00:14:24.399 "aliases": [ 00:14:24.399 "08e3b04c-eee2-468c-8665-1b9ac21fdc6a" 00:14:24.399 ], 00:14:24.399 "product_name": "Malloc disk", 00:14:24.399 "block_size": 512, 00:14:24.399 "num_blocks": 65536, 00:14:24.399 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:24.399 "assigned_rate_limits": { 00:14:24.399 "rw_ios_per_sec": 0, 00:14:24.399 "rw_mbytes_per_sec": 0, 00:14:24.399 "r_mbytes_per_sec": 0, 00:14:24.399 "w_mbytes_per_sec": 0 00:14:24.399 }, 00:14:24.399 "claimed": true, 00:14:24.399 "claim_type": "exclusive_write", 00:14:24.399 "zoned": false, 00:14:24.399 "supported_io_types": { 00:14:24.399 "read": true, 00:14:24.399 "write": true, 00:14:24.399 "unmap": true, 00:14:24.399 "flush": true, 00:14:24.399 "reset": true, 00:14:24.399 "nvme_admin": false, 00:14:24.399 "nvme_io": false, 00:14:24.399 "nvme_io_md": false, 00:14:24.399 "write_zeroes": true, 00:14:24.399 "zcopy": true, 00:14:24.399 "get_zone_info": false, 00:14:24.399 "zone_management": false, 00:14:24.399 "zone_append": false, 00:14:24.399 "compare": false, 00:14:24.399 "compare_and_write": false, 00:14:24.399 "abort": true, 00:14:24.399 "seek_hole": false, 00:14:24.399 "seek_data": false, 00:14:24.399 "copy": true, 00:14:24.399 "nvme_iov_md": 
false 00:14:24.399 }, 00:14:24.399 "memory_domains": [ 00:14:24.399 { 00:14:24.399 "dma_device_id": "system", 00:14:24.399 "dma_device_type": 1 00:14:24.399 }, 00:14:24.399 { 00:14:24.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.399 "dma_device_type": 2 00:14:24.399 } 00:14:24.399 ], 00:14:24.399 "driver_specific": {} 00:14:24.399 } 00:14:24.399 ] 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.399 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.966 18:29:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.966 "name": "Existed_Raid", 00:14:24.966 "uuid": "7c648ce1-6e2d-4e5d-8a49-0365adbc6969", 00:14:24.966 "strip_size_kb": 64, 00:14:24.966 "state": "online", 00:14:24.966 "raid_level": "concat", 00:14:24.966 "superblock": false, 00:14:24.966 "num_base_bdevs": 3, 00:14:24.966 "num_base_bdevs_discovered": 3, 00:14:24.966 "num_base_bdevs_operational": 3, 00:14:24.966 "base_bdevs_list": [ 00:14:24.966 { 00:14:24.966 "name": "NewBaseBdev", 00:14:24.966 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:24.966 "is_configured": true, 00:14:24.966 "data_offset": 0, 00:14:24.966 "data_size": 65536 00:14:24.966 }, 00:14:24.966 { 00:14:24.966 "name": "BaseBdev2", 00:14:24.966 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:24.966 "is_configured": true, 00:14:24.966 "data_offset": 0, 00:14:24.966 "data_size": 65536 00:14:24.966 }, 00:14:24.966 { 00:14:24.966 "name": "BaseBdev3", 00:14:24.966 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:24.966 "is_configured": true, 00:14:24.966 "data_offset": 0, 00:14:24.966 "data_size": 65536 00:14:24.966 } 00:14:24.966 ] 00:14:24.966 }' 00:14:24.966 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.966 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:25.533 18:29:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:25.533 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:26.100 [2024-07-15 18:29:11.364051] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.100 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:26.100 "name": "Existed_Raid", 00:14:26.100 "aliases": [ 00:14:26.100 "7c648ce1-6e2d-4e5d-8a49-0365adbc6969" 00:14:26.100 ], 00:14:26.100 "product_name": "Raid Volume", 00:14:26.100 "block_size": 512, 00:14:26.100 "num_blocks": 196608, 00:14:26.100 "uuid": "7c648ce1-6e2d-4e5d-8a49-0365adbc6969", 00:14:26.100 "assigned_rate_limits": { 00:14:26.100 "rw_ios_per_sec": 0, 00:14:26.100 "rw_mbytes_per_sec": 0, 00:14:26.100 "r_mbytes_per_sec": 0, 00:14:26.100 "w_mbytes_per_sec": 0 00:14:26.100 }, 00:14:26.100 "claimed": false, 00:14:26.100 "zoned": false, 00:14:26.100 "supported_io_types": { 00:14:26.100 "read": true, 00:14:26.100 "write": true, 00:14:26.100 "unmap": true, 00:14:26.100 "flush": true, 00:14:26.100 "reset": true, 00:14:26.100 "nvme_admin": false, 00:14:26.100 "nvme_io": false, 00:14:26.100 "nvme_io_md": false, 00:14:26.100 "write_zeroes": true, 00:14:26.100 "zcopy": false, 00:14:26.100 "get_zone_info": false, 00:14:26.100 "zone_management": false, 00:14:26.100 "zone_append": false, 00:14:26.100 "compare": false, 00:14:26.100 "compare_and_write": false, 00:14:26.100 "abort": false, 00:14:26.100 "seek_hole": false, 00:14:26.100 "seek_data": false, 00:14:26.100 "copy": false, 00:14:26.100 "nvme_iov_md": false 00:14:26.100 }, 00:14:26.100 "memory_domains": [ 00:14:26.100 { 00:14:26.100 "dma_device_id": "system", 00:14:26.100 "dma_device_type": 1 00:14:26.100 }, 
00:14:26.100 { 00:14:26.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.100 "dma_device_type": 2 00:14:26.100 }, 00:14:26.100 { 00:14:26.100 "dma_device_id": "system", 00:14:26.100 "dma_device_type": 1 00:14:26.100 }, 00:14:26.100 { 00:14:26.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.100 "dma_device_type": 2 00:14:26.100 }, 00:14:26.100 { 00:14:26.100 "dma_device_id": "system", 00:14:26.100 "dma_device_type": 1 00:14:26.100 }, 00:14:26.100 { 00:14:26.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.100 "dma_device_type": 2 00:14:26.100 } 00:14:26.100 ], 00:14:26.100 "driver_specific": { 00:14:26.100 "raid": { 00:14:26.100 "uuid": "7c648ce1-6e2d-4e5d-8a49-0365adbc6969", 00:14:26.100 "strip_size_kb": 64, 00:14:26.100 "state": "online", 00:14:26.100 "raid_level": "concat", 00:14:26.100 "superblock": false, 00:14:26.100 "num_base_bdevs": 3, 00:14:26.100 "num_base_bdevs_discovered": 3, 00:14:26.100 "num_base_bdevs_operational": 3, 00:14:26.100 "base_bdevs_list": [ 00:14:26.100 { 00:14:26.100 "name": "NewBaseBdev", 00:14:26.100 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:26.100 "is_configured": true, 00:14:26.100 "data_offset": 0, 00:14:26.100 "data_size": 65536 00:14:26.100 }, 00:14:26.100 { 00:14:26.100 "name": "BaseBdev2", 00:14:26.100 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:26.100 "is_configured": true, 00:14:26.100 "data_offset": 0, 00:14:26.100 "data_size": 65536 00:14:26.100 }, 00:14:26.100 { 00:14:26.100 "name": "BaseBdev3", 00:14:26.100 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:26.100 "is_configured": true, 00:14:26.100 "data_offset": 0, 00:14:26.100 "data_size": 65536 00:14:26.100 } 00:14:26.100 ] 00:14:26.100 } 00:14:26.100 } 00:14:26.100 }' 00:14:26.100 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.100 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:14:26.100 BaseBdev2 00:14:26.100 BaseBdev3' 00:14:26.100 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:26.100 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:26.100 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:26.392 "name": "NewBaseBdev", 00:14:26.392 "aliases": [ 00:14:26.392 "08e3b04c-eee2-468c-8665-1b9ac21fdc6a" 00:14:26.392 ], 00:14:26.392 "product_name": "Malloc disk", 00:14:26.392 "block_size": 512, 00:14:26.392 "num_blocks": 65536, 00:14:26.392 "uuid": "08e3b04c-eee2-468c-8665-1b9ac21fdc6a", 00:14:26.392 "assigned_rate_limits": { 00:14:26.392 "rw_ios_per_sec": 0, 00:14:26.392 "rw_mbytes_per_sec": 0, 00:14:26.392 "r_mbytes_per_sec": 0, 00:14:26.392 "w_mbytes_per_sec": 0 00:14:26.392 }, 00:14:26.392 "claimed": true, 00:14:26.392 "claim_type": "exclusive_write", 00:14:26.392 "zoned": false, 00:14:26.392 "supported_io_types": { 00:14:26.392 "read": true, 00:14:26.392 "write": true, 00:14:26.392 "unmap": true, 00:14:26.392 "flush": true, 00:14:26.392 "reset": true, 00:14:26.392 "nvme_admin": false, 00:14:26.392 "nvme_io": false, 00:14:26.392 "nvme_io_md": false, 00:14:26.392 "write_zeroes": true, 00:14:26.392 "zcopy": true, 00:14:26.392 "get_zone_info": false, 00:14:26.392 "zone_management": false, 00:14:26.392 "zone_append": false, 00:14:26.392 "compare": false, 00:14:26.392 "compare_and_write": false, 00:14:26.392 "abort": true, 00:14:26.392 "seek_hole": false, 00:14:26.392 "seek_data": false, 00:14:26.392 "copy": true, 00:14:26.392 "nvme_iov_md": false 00:14:26.392 }, 00:14:26.392 "memory_domains": [ 00:14:26.392 { 00:14:26.392 "dma_device_id": "system", 00:14:26.392 
"dma_device_type": 1 00:14:26.392 }, 00:14:26.392 { 00:14:26.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.392 "dma_device_type": 2 00:14:26.392 } 00:14:26.392 ], 00:14:26.392 "driver_specific": {} 00:14:26.392 }' 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:26.392 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:26.650 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:26.650 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:26.651 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:26.651 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:26.651 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:26.651 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:26.651 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:26.651 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:26.909 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:26.909 "name": 
"BaseBdev2", 00:14:26.909 "aliases": [ 00:14:26.909 "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc" 00:14:26.909 ], 00:14:26.909 "product_name": "Malloc disk", 00:14:26.909 "block_size": 512, 00:14:26.909 "num_blocks": 65536, 00:14:26.909 "uuid": "e8e6124c-f185-4b3e-a64d-0f0ce3edb9dc", 00:14:26.909 "assigned_rate_limits": { 00:14:26.909 "rw_ios_per_sec": 0, 00:14:26.909 "rw_mbytes_per_sec": 0, 00:14:26.909 "r_mbytes_per_sec": 0, 00:14:26.909 "w_mbytes_per_sec": 0 00:14:26.909 }, 00:14:26.909 "claimed": true, 00:14:26.909 "claim_type": "exclusive_write", 00:14:26.909 "zoned": false, 00:14:26.909 "supported_io_types": { 00:14:26.909 "read": true, 00:14:26.909 "write": true, 00:14:26.909 "unmap": true, 00:14:26.909 "flush": true, 00:14:26.909 "reset": true, 00:14:26.909 "nvme_admin": false, 00:14:26.909 "nvme_io": false, 00:14:26.909 "nvme_io_md": false, 00:14:26.909 "write_zeroes": true, 00:14:26.909 "zcopy": true, 00:14:26.909 "get_zone_info": false, 00:14:26.909 "zone_management": false, 00:14:26.909 "zone_append": false, 00:14:26.909 "compare": false, 00:14:26.909 "compare_and_write": false, 00:14:26.909 "abort": true, 00:14:26.909 "seek_hole": false, 00:14:26.909 "seek_data": false, 00:14:26.909 "copy": true, 00:14:26.909 "nvme_iov_md": false 00:14:26.909 }, 00:14:26.909 "memory_domains": [ 00:14:26.909 { 00:14:26.909 "dma_device_id": "system", 00:14:26.909 "dma_device_type": 1 00:14:26.909 }, 00:14:26.909 { 00:14:26.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.909 "dma_device_type": 2 00:14:26.909 } 00:14:26.909 ], 00:14:26.909 "driver_specific": {} 00:14:26.909 }' 00:14:26.909 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.909 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.909 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:26.909 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:14:26.909 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.169 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.169 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.169 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.169 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.169 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.169 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.427 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.427 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.427 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:27.427 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.686 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.686 "name": "BaseBdev3", 00:14:27.686 "aliases": [ 00:14:27.686 "1859fa51-f249-4796-a550-2492f8537126" 00:14:27.686 ], 00:14:27.686 "product_name": "Malloc disk", 00:14:27.686 "block_size": 512, 00:14:27.686 "num_blocks": 65536, 00:14:27.686 "uuid": "1859fa51-f249-4796-a550-2492f8537126", 00:14:27.686 "assigned_rate_limits": { 00:14:27.686 "rw_ios_per_sec": 0, 00:14:27.686 "rw_mbytes_per_sec": 0, 00:14:27.686 "r_mbytes_per_sec": 0, 00:14:27.686 "w_mbytes_per_sec": 0 00:14:27.686 }, 00:14:27.686 "claimed": true, 00:14:27.686 "claim_type": "exclusive_write", 00:14:27.686 "zoned": false, 00:14:27.686 "supported_io_types": { 
00:14:27.686 "read": true, 00:14:27.686 "write": true, 00:14:27.686 "unmap": true, 00:14:27.686 "flush": true, 00:14:27.686 "reset": true, 00:14:27.686 "nvme_admin": false, 00:14:27.686 "nvme_io": false, 00:14:27.686 "nvme_io_md": false, 00:14:27.686 "write_zeroes": true, 00:14:27.686 "zcopy": true, 00:14:27.686 "get_zone_info": false, 00:14:27.686 "zone_management": false, 00:14:27.686 "zone_append": false, 00:14:27.686 "compare": false, 00:14:27.686 "compare_and_write": false, 00:14:27.686 "abort": true, 00:14:27.686 "seek_hole": false, 00:14:27.686 "seek_data": false, 00:14:27.686 "copy": true, 00:14:27.686 "nvme_iov_md": false 00:14:27.686 }, 00:14:27.686 "memory_domains": [ 00:14:27.686 { 00:14:27.686 "dma_device_id": "system", 00:14:27.686 "dma_device_type": 1 00:14:27.686 }, 00:14:27.686 { 00:14:27.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.686 "dma_device_type": 2 00:14:27.686 } 00:14:27.686 ], 00:14:27.686 "driver_specific": {} 00:14:27.686 }' 00:14:27.686 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.686 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.686 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:27.686 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.686 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.686 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.686 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.943 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.943 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.943 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:14:27.943 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.202 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.202 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:28.462 [2024-07-15 18:29:13.962764] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.462 [2024-07-15 18:29:13.962790] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.462 [2024-07-15 18:29:13.962843] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.462 [2024-07-15 18:29:13.962893] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.462 [2024-07-15 18:29:13.962902] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe5bcf0 name Existed_Raid, state offline 00:14:28.462 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2797857 00:14:28.462 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2797857 ']' 00:14:28.462 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2797857 00:14:28.462 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:28.463 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.463 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2797857 00:14:28.721 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.721 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:14:28.721 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2797857' 00:14:28.721 killing process with pid 2797857 00:14:28.721 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2797857 00:14:28.721 [2024-07-15 18:29:14.035437] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.721 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2797857 00:14:28.721 [2024-07-15 18:29:14.061335] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.721 18:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:28.722 00:14:28.722 real 0m30.991s 00:14:28.722 user 0m58.671s 00:14:28.722 sys 0m4.254s 00:14:28.722 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.722 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.722 ************************************ 00:14:28.722 END TEST raid_state_function_test 00:14:28.722 ************************************ 00:14:28.980 18:29:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:28.981 18:29:14 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:14:28.981 18:29:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:28.981 18:29:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.981 18:29:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.981 ************************************ 00:14:28.981 START TEST raid_state_function_test_sb 00:14:28.981 ************************************ 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2803348 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2803348' 00:14:28.981 Process raid pid: 2803348 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2803348 /var/tmp/spdk-raid.sock 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2803348 ']' 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock...' 00:14:28.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.981 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.981 [2024-07-15 18:29:14.358710] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:28.981 [2024-07-15 18:29:14.358771] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.981 [2024-07-15 18:29:14.451110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.240 [2024-07-15 18:29:14.545900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.240 [2024-07-15 18:29:14.601698] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.240 [2024-07-15 18:29:14.601722] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.240 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.240 18:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:29.240 18:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:29.807 [2024-07-15 18:29:15.119120] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.807 [2024-07-15 18:29:15.119160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.807 [2024-07-15 18:29:15.119169] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.807 [2024-07-15 18:29:15.119178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.807 [2024-07-15 18:29:15.119188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.807 [2024-07-15 18:29:15.119196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.807 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:30.375 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.375 "name": "Existed_Raid", 00:14:30.375 "uuid": "77a98d86-e48b-41a9-bc0f-1d0693d19094", 00:14:30.375 "strip_size_kb": 64, 00:14:30.375 "state": "configuring", 00:14:30.375 "raid_level": "concat", 00:14:30.375 "superblock": true, 00:14:30.375 "num_base_bdevs": 3, 00:14:30.375 "num_base_bdevs_discovered": 0, 00:14:30.375 "num_base_bdevs_operational": 3, 00:14:30.375 "base_bdevs_list": [ 00:14:30.375 { 00:14:30.375 "name": "BaseBdev1", 00:14:30.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.375 "is_configured": false, 00:14:30.375 "data_offset": 0, 00:14:30.375 "data_size": 0 00:14:30.375 }, 00:14:30.375 { 00:14:30.375 "name": "BaseBdev2", 00:14:30.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.375 "is_configured": false, 00:14:30.375 "data_offset": 0, 00:14:30.375 "data_size": 0 00:14:30.375 }, 00:14:30.375 { 00:14:30.375 "name": "BaseBdev3", 00:14:30.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.375 "is_configured": false, 00:14:30.375 "data_offset": 0, 00:14:30.375 "data_size": 0 00:14:30.375 } 00:14:30.375 ] 00:14:30.375 }' 00:14:30.375 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.375 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.942 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:30.942 [2024-07-15 18:29:16.474578] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.942 [2024-07-15 18:29:16.474608] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23d2ba0 name Existed_Raid, state configuring 00:14:31.201 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:31.201 [2024-07-15 18:29:16.735308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.201 [2024-07-15 18:29:16.735339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.201 [2024-07-15 18:29:16.735348] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.201 [2024-07-15 18:29:16.735356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.201 [2024-07-15 18:29:16.735363] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.201 [2024-07-15 18:29:16.735371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.460 [2024-07-15 18:29:16.921147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.460 BaseBdev1 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:14:31.460 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.718 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.977 [ 00:14:31.977 { 00:14:31.977 "name": "BaseBdev1", 00:14:31.977 "aliases": [ 00:14:31.977 "ff22933c-88fb-4001-aade-617a26442f04" 00:14:31.977 ], 00:14:31.977 "product_name": "Malloc disk", 00:14:31.977 "block_size": 512, 00:14:31.977 "num_blocks": 65536, 00:14:31.977 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:31.977 "assigned_rate_limits": { 00:14:31.977 "rw_ios_per_sec": 0, 00:14:31.977 "rw_mbytes_per_sec": 0, 00:14:31.977 "r_mbytes_per_sec": 0, 00:14:31.977 "w_mbytes_per_sec": 0 00:14:31.977 }, 00:14:31.977 "claimed": true, 00:14:31.977 "claim_type": "exclusive_write", 00:14:31.977 "zoned": false, 00:14:31.977 "supported_io_types": { 00:14:31.977 "read": true, 00:14:31.977 "write": true, 00:14:31.977 "unmap": true, 00:14:31.977 "flush": true, 00:14:31.977 "reset": true, 00:14:31.977 "nvme_admin": false, 00:14:31.977 "nvme_io": false, 00:14:31.977 "nvme_io_md": false, 00:14:31.977 "write_zeroes": true, 00:14:31.977 "zcopy": true, 00:14:31.977 "get_zone_info": false, 00:14:31.977 "zone_management": false, 00:14:31.977 "zone_append": false, 00:14:31.977 "compare": false, 00:14:31.977 "compare_and_write": false, 00:14:31.977 "abort": true, 00:14:31.977 "seek_hole": false, 00:14:31.977 "seek_data": false, 00:14:31.977 "copy": true, 00:14:31.977 "nvme_iov_md": false 00:14:31.977 }, 00:14:31.977 "memory_domains": [ 00:14:31.977 { 00:14:31.977 "dma_device_id": "system", 00:14:31.977 "dma_device_type": 1 00:14:31.977 }, 00:14:31.977 { 00:14:31.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.977 
"dma_device_type": 2 00:14:31.977 } 00:14:31.977 ], 00:14:31.977 "driver_specific": {} 00:14:31.977 } 00:14:31.977 ] 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.977 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:31.978 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.978 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.978 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.978 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.978 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.978 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.236 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.236 "name": "Existed_Raid", 00:14:32.236 "uuid": "4782b9ea-e5ab-4dbe-ba5f-a7bd66f91bdd", 00:14:32.236 "strip_size_kb": 64, 
00:14:32.236 "state": "configuring", 00:14:32.236 "raid_level": "concat", 00:14:32.236 "superblock": true, 00:14:32.236 "num_base_bdevs": 3, 00:14:32.236 "num_base_bdevs_discovered": 1, 00:14:32.236 "num_base_bdevs_operational": 3, 00:14:32.236 "base_bdevs_list": [ 00:14:32.236 { 00:14:32.236 "name": "BaseBdev1", 00:14:32.236 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:32.236 "is_configured": true, 00:14:32.236 "data_offset": 2048, 00:14:32.236 "data_size": 63488 00:14:32.236 }, 00:14:32.236 { 00:14:32.236 "name": "BaseBdev2", 00:14:32.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.236 "is_configured": false, 00:14:32.236 "data_offset": 0, 00:14:32.236 "data_size": 0 00:14:32.236 }, 00:14:32.236 { 00:14:32.236 "name": "BaseBdev3", 00:14:32.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.236 "is_configured": false, 00:14:32.236 "data_offset": 0, 00:14:32.236 "data_size": 0 00:14:32.236 } 00:14:32.236 ] 00:14:32.236 }' 00:14:32.236 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.236 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.171 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:33.171 [2024-07-15 18:29:18.589636] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.171 [2024-07-15 18:29:18.589674] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23d2470 name Existed_Raid, state configuring 00:14:33.171 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:33.430 [2024-07-15 18:29:18.842359] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.430 [2024-07-15 18:29:18.843878] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.430 [2024-07-15 18:29:18.843909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.430 [2024-07-15 18:29:18.843917] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.430 [2024-07-15 18:29:18.843926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.430 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.688 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.688 "name": "Existed_Raid", 00:14:33.688 "uuid": "83ab785b-5c49-472b-8229-d1a78fdb0390", 00:14:33.688 "strip_size_kb": 64, 00:14:33.688 "state": "configuring", 00:14:33.688 "raid_level": "concat", 00:14:33.688 "superblock": true, 00:14:33.688 "num_base_bdevs": 3, 00:14:33.688 "num_base_bdevs_discovered": 1, 00:14:33.688 "num_base_bdevs_operational": 3, 00:14:33.688 "base_bdevs_list": [ 00:14:33.688 { 00:14:33.688 "name": "BaseBdev1", 00:14:33.688 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:33.688 "is_configured": true, 00:14:33.688 "data_offset": 2048, 00:14:33.688 "data_size": 63488 00:14:33.688 }, 00:14:33.688 { 00:14:33.688 "name": "BaseBdev2", 00:14:33.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.688 "is_configured": false, 00:14:33.688 "data_offset": 0, 00:14:33.688 "data_size": 0 00:14:33.688 }, 00:14:33.688 { 00:14:33.688 "name": "BaseBdev3", 00:14:33.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.688 "is_configured": false, 00:14:33.688 "data_offset": 0, 00:14:33.688 "data_size": 0 00:14:33.688 } 00:14:33.688 ] 00:14:33.688 }' 00:14:33.688 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.688 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.255 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 
00:14:34.513 [2024-07-15 18:29:19.956484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.513 BaseBdev2 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:34.513 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:34.772 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:35.030 [ 00:14:35.030 { 00:14:35.030 "name": "BaseBdev2", 00:14:35.030 "aliases": [ 00:14:35.030 "9c3b5d7e-b351-4a03-9a88-c40d15b50fab" 00:14:35.030 ], 00:14:35.030 "product_name": "Malloc disk", 00:14:35.030 "block_size": 512, 00:14:35.030 "num_blocks": 65536, 00:14:35.030 "uuid": "9c3b5d7e-b351-4a03-9a88-c40d15b50fab", 00:14:35.030 "assigned_rate_limits": { 00:14:35.030 "rw_ios_per_sec": 0, 00:14:35.030 "rw_mbytes_per_sec": 0, 00:14:35.030 "r_mbytes_per_sec": 0, 00:14:35.030 "w_mbytes_per_sec": 0 00:14:35.030 }, 00:14:35.030 "claimed": true, 00:14:35.030 "claim_type": "exclusive_write", 00:14:35.030 "zoned": false, 00:14:35.030 "supported_io_types": { 00:14:35.030 "read": true, 00:14:35.030 "write": true, 
00:14:35.030 "unmap": true, 00:14:35.030 "flush": true, 00:14:35.030 "reset": true, 00:14:35.030 "nvme_admin": false, 00:14:35.030 "nvme_io": false, 00:14:35.030 "nvme_io_md": false, 00:14:35.030 "write_zeroes": true, 00:14:35.030 "zcopy": true, 00:14:35.030 "get_zone_info": false, 00:14:35.030 "zone_management": false, 00:14:35.030 "zone_append": false, 00:14:35.030 "compare": false, 00:14:35.030 "compare_and_write": false, 00:14:35.030 "abort": true, 00:14:35.030 "seek_hole": false, 00:14:35.030 "seek_data": false, 00:14:35.030 "copy": true, 00:14:35.030 "nvme_iov_md": false 00:14:35.030 }, 00:14:35.030 "memory_domains": [ 00:14:35.030 { 00:14:35.030 "dma_device_id": "system", 00:14:35.030 "dma_device_type": 1 00:14:35.030 }, 00:14:35.030 { 00:14:35.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.030 "dma_device_type": 2 00:14:35.030 } 00:14:35.030 ], 00:14:35.030 "driver_specific": {} 00:14:35.030 } 00:14:35.030 ] 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.030 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.031 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.289 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.289 "name": "Existed_Raid", 00:14:35.289 "uuid": "83ab785b-5c49-472b-8229-d1a78fdb0390", 00:14:35.289 "strip_size_kb": 64, 00:14:35.289 "state": "configuring", 00:14:35.289 "raid_level": "concat", 00:14:35.289 "superblock": true, 00:14:35.289 "num_base_bdevs": 3, 00:14:35.289 "num_base_bdevs_discovered": 2, 00:14:35.289 "num_base_bdevs_operational": 3, 00:14:35.289 "base_bdevs_list": [ 00:14:35.289 { 00:14:35.289 "name": "BaseBdev1", 00:14:35.289 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:35.289 "is_configured": true, 00:14:35.289 "data_offset": 2048, 00:14:35.289 "data_size": 63488 00:14:35.289 }, 00:14:35.289 { 00:14:35.289 "name": "BaseBdev2", 00:14:35.289 "uuid": "9c3b5d7e-b351-4a03-9a88-c40d15b50fab", 00:14:35.289 "is_configured": true, 00:14:35.289 "data_offset": 2048, 00:14:35.289 "data_size": 63488 00:14:35.289 }, 00:14:35.289 { 00:14:35.289 "name": "BaseBdev3", 00:14:35.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.289 "is_configured": false, 00:14:35.289 "data_offset": 0, 00:14:35.289 "data_size": 0 00:14:35.289 } 
00:14:35.289 ] 00:14:35.289 }' 00:14:35.289 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.289 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.855 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.114 [2024-07-15 18:29:21.600116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.114 [2024-07-15 18:29:21.600283] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23d3360 00:14:36.114 [2024-07-15 18:29:21.600298] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:36.114 [2024-07-15 18:29:21.600482] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20d87d0 00:14:36.114 [2024-07-15 18:29:21.600602] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23d3360 00:14:36.114 [2024-07-15 18:29:21.600610] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23d3360 00:14:36.114 [2024-07-15 18:29:21.600706] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.114 BaseBdev3 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.114 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.372 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:36.630 [ 00:14:36.631 { 00:14:36.631 "name": "BaseBdev3", 00:14:36.631 "aliases": [ 00:14:36.631 "a6281f6e-6c46-451b-9e21-49112c600068" 00:14:36.631 ], 00:14:36.631 "product_name": "Malloc disk", 00:14:36.631 "block_size": 512, 00:14:36.631 "num_blocks": 65536, 00:14:36.631 "uuid": "a6281f6e-6c46-451b-9e21-49112c600068", 00:14:36.631 "assigned_rate_limits": { 00:14:36.631 "rw_ios_per_sec": 0, 00:14:36.631 "rw_mbytes_per_sec": 0, 00:14:36.631 "r_mbytes_per_sec": 0, 00:14:36.631 "w_mbytes_per_sec": 0 00:14:36.631 }, 00:14:36.631 "claimed": true, 00:14:36.631 "claim_type": "exclusive_write", 00:14:36.631 "zoned": false, 00:14:36.631 "supported_io_types": { 00:14:36.631 "read": true, 00:14:36.631 "write": true, 00:14:36.631 "unmap": true, 00:14:36.631 "flush": true, 00:14:36.631 "reset": true, 00:14:36.631 "nvme_admin": false, 00:14:36.631 "nvme_io": false, 00:14:36.631 "nvme_io_md": false, 00:14:36.631 "write_zeroes": true, 00:14:36.631 "zcopy": true, 00:14:36.631 "get_zone_info": false, 00:14:36.631 "zone_management": false, 00:14:36.631 "zone_append": false, 00:14:36.631 "compare": false, 00:14:36.631 "compare_and_write": false, 00:14:36.631 "abort": true, 00:14:36.631 "seek_hole": false, 00:14:36.631 "seek_data": false, 00:14:36.631 "copy": true, 00:14:36.631 "nvme_iov_md": false 00:14:36.631 }, 00:14:36.631 "memory_domains": [ 00:14:36.631 { 00:14:36.631 "dma_device_id": "system", 00:14:36.631 "dma_device_type": 1 00:14:36.631 }, 00:14:36.631 { 00:14:36.631 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:36.631 "dma_device_type": 2 00:14:36.631 } 00:14:36.631 ], 00:14:36.631 "driver_specific": {} 00:14:36.631 } 00:14:36.631 ] 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.631 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:36.889 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.889 "name": "Existed_Raid", 00:14:36.889 "uuid": "83ab785b-5c49-472b-8229-d1a78fdb0390", 00:14:36.889 "strip_size_kb": 64, 00:14:36.889 "state": "online", 00:14:36.889 "raid_level": "concat", 00:14:36.889 "superblock": true, 00:14:36.889 "num_base_bdevs": 3, 00:14:36.889 "num_base_bdevs_discovered": 3, 00:14:36.889 "num_base_bdevs_operational": 3, 00:14:36.889 "base_bdevs_list": [ 00:14:36.889 { 00:14:36.889 "name": "BaseBdev1", 00:14:36.889 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:36.889 "is_configured": true, 00:14:36.889 "data_offset": 2048, 00:14:36.889 "data_size": 63488 00:14:36.889 }, 00:14:36.889 { 00:14:36.889 "name": "BaseBdev2", 00:14:36.889 "uuid": "9c3b5d7e-b351-4a03-9a88-c40d15b50fab", 00:14:36.889 "is_configured": true, 00:14:36.889 "data_offset": 2048, 00:14:36.889 "data_size": 63488 00:14:36.889 }, 00:14:36.889 { 00:14:36.889 "name": "BaseBdev3", 00:14:36.889 "uuid": "a6281f6e-6c46-451b-9e21-49112c600068", 00:14:36.889 "is_configured": true, 00:14:36.889 "data_offset": 2048, 00:14:36.889 "data_size": 63488 00:14:36.889 } 00:14:36.889 ] 00:14:36.889 }' 00:14:36.889 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.889 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:37.831 [2024-07-15 18:29:23.272961] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.831 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:37.831 "name": "Existed_Raid", 00:14:37.831 "aliases": [ 00:14:37.831 "83ab785b-5c49-472b-8229-d1a78fdb0390" 00:14:37.831 ], 00:14:37.831 "product_name": "Raid Volume", 00:14:37.831 "block_size": 512, 00:14:37.831 "num_blocks": 190464, 00:14:37.831 "uuid": "83ab785b-5c49-472b-8229-d1a78fdb0390", 00:14:37.831 "assigned_rate_limits": { 00:14:37.831 "rw_ios_per_sec": 0, 00:14:37.831 "rw_mbytes_per_sec": 0, 00:14:37.831 "r_mbytes_per_sec": 0, 00:14:37.831 "w_mbytes_per_sec": 0 00:14:37.831 }, 00:14:37.831 "claimed": false, 00:14:37.831 "zoned": false, 00:14:37.831 "supported_io_types": { 00:14:37.831 "read": true, 00:14:37.831 "write": true, 00:14:37.831 "unmap": true, 00:14:37.831 "flush": true, 00:14:37.831 "reset": true, 00:14:37.831 "nvme_admin": false, 00:14:37.831 "nvme_io": false, 00:14:37.831 "nvme_io_md": false, 00:14:37.831 "write_zeroes": true, 00:14:37.831 "zcopy": false, 00:14:37.831 "get_zone_info": false, 00:14:37.831 "zone_management": false, 00:14:37.831 "zone_append": false, 00:14:37.831 "compare": false, 00:14:37.831 "compare_and_write": false, 00:14:37.831 "abort": false, 00:14:37.831 "seek_hole": false, 00:14:37.831 "seek_data": false, 00:14:37.831 "copy": false, 00:14:37.831 "nvme_iov_md": false 00:14:37.831 }, 00:14:37.831 "memory_domains": [ 00:14:37.831 { 00:14:37.831 "dma_device_id": "system", 
00:14:37.831 "dma_device_type": 1 00:14:37.831 }, 00:14:37.831 { 00:14:37.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.831 "dma_device_type": 2 00:14:37.831 }, 00:14:37.831 { 00:14:37.832 "dma_device_id": "system", 00:14:37.832 "dma_device_type": 1 00:14:37.832 }, 00:14:37.832 { 00:14:37.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.832 "dma_device_type": 2 00:14:37.832 }, 00:14:37.832 { 00:14:37.832 "dma_device_id": "system", 00:14:37.832 "dma_device_type": 1 00:14:37.832 }, 00:14:37.832 { 00:14:37.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.832 "dma_device_type": 2 00:14:37.832 } 00:14:37.832 ], 00:14:37.832 "driver_specific": { 00:14:37.832 "raid": { 00:14:37.832 "uuid": "83ab785b-5c49-472b-8229-d1a78fdb0390", 00:14:37.832 "strip_size_kb": 64, 00:14:37.832 "state": "online", 00:14:37.832 "raid_level": "concat", 00:14:37.832 "superblock": true, 00:14:37.832 "num_base_bdevs": 3, 00:14:37.832 "num_base_bdevs_discovered": 3, 00:14:37.832 "num_base_bdevs_operational": 3, 00:14:37.832 "base_bdevs_list": [ 00:14:37.832 { 00:14:37.832 "name": "BaseBdev1", 00:14:37.832 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:37.832 "is_configured": true, 00:14:37.832 "data_offset": 2048, 00:14:37.832 "data_size": 63488 00:14:37.832 }, 00:14:37.832 { 00:14:37.832 "name": "BaseBdev2", 00:14:37.832 "uuid": "9c3b5d7e-b351-4a03-9a88-c40d15b50fab", 00:14:37.832 "is_configured": true, 00:14:37.832 "data_offset": 2048, 00:14:37.832 "data_size": 63488 00:14:37.832 }, 00:14:37.832 { 00:14:37.832 "name": "BaseBdev3", 00:14:37.832 "uuid": "a6281f6e-6c46-451b-9e21-49112c600068", 00:14:37.832 "is_configured": true, 00:14:37.832 "data_offset": 2048, 00:14:37.832 "data_size": 63488 00:14:37.832 } 00:14:37.832 ] 00:14:37.832 } 00:14:37.832 } 00:14:37.832 }' 00:14:37.832 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.832 18:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:37.832 BaseBdev2 00:14:37.832 BaseBdev3' 00:14:37.832 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:37.832 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:37.832 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:38.099 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:38.099 "name": "BaseBdev1", 00:14:38.099 "aliases": [ 00:14:38.099 "ff22933c-88fb-4001-aade-617a26442f04" 00:14:38.099 ], 00:14:38.099 "product_name": "Malloc disk", 00:14:38.099 "block_size": 512, 00:14:38.099 "num_blocks": 65536, 00:14:38.099 "uuid": "ff22933c-88fb-4001-aade-617a26442f04", 00:14:38.099 "assigned_rate_limits": { 00:14:38.099 "rw_ios_per_sec": 0, 00:14:38.099 "rw_mbytes_per_sec": 0, 00:14:38.099 "r_mbytes_per_sec": 0, 00:14:38.099 "w_mbytes_per_sec": 0 00:14:38.099 }, 00:14:38.099 "claimed": true, 00:14:38.099 "claim_type": "exclusive_write", 00:14:38.099 "zoned": false, 00:14:38.099 "supported_io_types": { 00:14:38.099 "read": true, 00:14:38.099 "write": true, 00:14:38.099 "unmap": true, 00:14:38.099 "flush": true, 00:14:38.099 "reset": true, 00:14:38.099 "nvme_admin": false, 00:14:38.099 "nvme_io": false, 00:14:38.099 "nvme_io_md": false, 00:14:38.100 "write_zeroes": true, 00:14:38.100 "zcopy": true, 00:14:38.100 "get_zone_info": false, 00:14:38.100 "zone_management": false, 00:14:38.100 "zone_append": false, 00:14:38.100 "compare": false, 00:14:38.100 "compare_and_write": false, 00:14:38.100 "abort": true, 00:14:38.100 "seek_hole": false, 00:14:38.100 "seek_data": false, 00:14:38.100 "copy": true, 00:14:38.100 "nvme_iov_md": false 00:14:38.100 }, 00:14:38.100 "memory_domains": 
[ 00:14:38.100 { 00:14:38.100 "dma_device_id": "system", 00:14:38.100 "dma_device_type": 1 00:14:38.100 }, 00:14:38.100 { 00:14:38.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.100 "dma_device_type": 2 00:14:38.100 } 00:14:38.100 ], 00:14:38.100 "driver_specific": {} 00:14:38.100 }' 00:14:38.100 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.100 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:38.358 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:38.618 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:38.618 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:38.618 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:38.618 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:38.618 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:38.877 "name": "BaseBdev2", 00:14:38.877 "aliases": [ 00:14:38.877 "9c3b5d7e-b351-4a03-9a88-c40d15b50fab" 00:14:38.877 ], 00:14:38.877 "product_name": "Malloc disk", 00:14:38.877 "block_size": 512, 00:14:38.877 "num_blocks": 65536, 00:14:38.877 "uuid": "9c3b5d7e-b351-4a03-9a88-c40d15b50fab", 00:14:38.877 "assigned_rate_limits": { 00:14:38.877 "rw_ios_per_sec": 0, 00:14:38.877 "rw_mbytes_per_sec": 0, 00:14:38.877 "r_mbytes_per_sec": 0, 00:14:38.877 "w_mbytes_per_sec": 0 00:14:38.877 }, 00:14:38.877 "claimed": true, 00:14:38.877 "claim_type": "exclusive_write", 00:14:38.877 "zoned": false, 00:14:38.877 "supported_io_types": { 00:14:38.877 "read": true, 00:14:38.877 "write": true, 00:14:38.877 "unmap": true, 00:14:38.877 "flush": true, 00:14:38.877 "reset": true, 00:14:38.877 "nvme_admin": false, 00:14:38.877 "nvme_io": false, 00:14:38.877 "nvme_io_md": false, 00:14:38.877 "write_zeroes": true, 00:14:38.877 "zcopy": true, 00:14:38.877 "get_zone_info": false, 00:14:38.877 "zone_management": false, 00:14:38.877 "zone_append": false, 00:14:38.877 "compare": false, 00:14:38.877 "compare_and_write": false, 00:14:38.877 "abort": true, 00:14:38.877 "seek_hole": false, 00:14:38.877 "seek_data": false, 00:14:38.877 "copy": true, 00:14:38.877 "nvme_iov_md": false 00:14:38.877 }, 00:14:38.877 "memory_domains": [ 00:14:38.877 { 00:14:38.877 "dma_device_id": "system", 00:14:38.877 "dma_device_type": 1 00:14:38.877 }, 00:14:38.877 { 00:14:38.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.877 "dma_device_type": 2 00:14:38.877 } 00:14:38.877 ], 00:14:38.877 "driver_specific": {} 00:14:38.877 }' 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:38.877 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:39.135 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.393 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.393 "name": "BaseBdev3", 00:14:39.393 "aliases": [ 00:14:39.393 "a6281f6e-6c46-451b-9e21-49112c600068" 00:14:39.393 ], 00:14:39.393 "product_name": "Malloc disk", 00:14:39.393 "block_size": 512, 00:14:39.393 "num_blocks": 65536, 00:14:39.393 "uuid": "a6281f6e-6c46-451b-9e21-49112c600068", 00:14:39.393 "assigned_rate_limits": { 00:14:39.393 "rw_ios_per_sec": 0, 00:14:39.393 "rw_mbytes_per_sec": 0, 00:14:39.393 "r_mbytes_per_sec": 0, 00:14:39.393 
"w_mbytes_per_sec": 0 00:14:39.393 }, 00:14:39.393 "claimed": true, 00:14:39.393 "claim_type": "exclusive_write", 00:14:39.393 "zoned": false, 00:14:39.393 "supported_io_types": { 00:14:39.393 "read": true, 00:14:39.393 "write": true, 00:14:39.393 "unmap": true, 00:14:39.393 "flush": true, 00:14:39.393 "reset": true, 00:14:39.393 "nvme_admin": false, 00:14:39.393 "nvme_io": false, 00:14:39.393 "nvme_io_md": false, 00:14:39.393 "write_zeroes": true, 00:14:39.393 "zcopy": true, 00:14:39.393 "get_zone_info": false, 00:14:39.393 "zone_management": false, 00:14:39.393 "zone_append": false, 00:14:39.393 "compare": false, 00:14:39.393 "compare_and_write": false, 00:14:39.393 "abort": true, 00:14:39.393 "seek_hole": false, 00:14:39.393 "seek_data": false, 00:14:39.394 "copy": true, 00:14:39.394 "nvme_iov_md": false 00:14:39.394 }, 00:14:39.394 "memory_domains": [ 00:14:39.394 { 00:14:39.394 "dma_device_id": "system", 00:14:39.394 "dma_device_type": 1 00:14:39.394 }, 00:14:39.394 { 00:14:39.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.394 "dma_device_type": 2 00:14:39.394 } 00:14:39.394 ], 00:14:39.394 "driver_specific": {} 00:14:39.394 }' 00:14:39.394 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.394 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.651 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.651 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.651 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.651 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.651 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.651 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:14:39.651 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.651 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.651 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.909 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.909 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:39.909 [2024-07-15 18:29:25.454568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.909 [2024-07-15 18:29:25.454594] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.909 [2024-07-15 18:29:25.454632] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.168 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.427 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.427 "name": "Existed_Raid", 00:14:40.427 "uuid": "83ab785b-5c49-472b-8229-d1a78fdb0390", 00:14:40.427 "strip_size_kb": 64, 00:14:40.427 "state": "offline", 00:14:40.427 "raid_level": "concat", 00:14:40.427 "superblock": true, 00:14:40.427 "num_base_bdevs": 3, 00:14:40.427 "num_base_bdevs_discovered": 2, 00:14:40.427 "num_base_bdevs_operational": 2, 00:14:40.427 "base_bdevs_list": [ 00:14:40.427 { 00:14:40.427 "name": null, 00:14:40.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.427 "is_configured": false, 00:14:40.427 "data_offset": 2048, 00:14:40.427 "data_size": 63488 00:14:40.427 }, 00:14:40.427 { 00:14:40.427 "name": "BaseBdev2", 00:14:40.427 "uuid": "9c3b5d7e-b351-4a03-9a88-c40d15b50fab", 00:14:40.427 "is_configured": true, 00:14:40.427 "data_offset": 2048, 00:14:40.427 "data_size": 
63488 00:14:40.427 }, 00:14:40.427 { 00:14:40.427 "name": "BaseBdev3", 00:14:40.427 "uuid": "a6281f6e-6c46-451b-9e21-49112c600068", 00:14:40.427 "is_configured": true, 00:14:40.427 "data_offset": 2048, 00:14:40.427 "data_size": 63488 00:14:40.427 } 00:14:40.427 ] 00:14:40.427 }' 00:14:40.427 18:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.427 18:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.027 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:41.027 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:41.027 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.027 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:41.286 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:41.286 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:41.286 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:41.543 [2024-07-15 18:29:26.851479] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.543 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:41.543 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:41.543 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:41.543 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:41.801 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:41.801 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:41.801 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:42.059 [2024-07-15 18:29:27.375397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.059 [2024-07-15 18:29:27.375439] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23d3360 name Existed_Raid, state offline 00:14:42.059 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:42.059 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:42.059 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.059 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:42.339 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:42.339 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:42.339 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:42.339 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:42.339 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:42.339 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.598 BaseBdev2 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:42.598 18:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.857 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.115 [ 00:14:43.115 { 00:14:43.115 "name": "BaseBdev2", 00:14:43.115 "aliases": [ 00:14:43.115 "1b376f34-bb89-4500-91b4-ebb100200c41" 00:14:43.115 ], 00:14:43.115 "product_name": "Malloc disk", 00:14:43.115 "block_size": 512, 00:14:43.115 "num_blocks": 65536, 00:14:43.115 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:43.115 "assigned_rate_limits": { 00:14:43.115 "rw_ios_per_sec": 0, 00:14:43.115 "rw_mbytes_per_sec": 0, 00:14:43.115 "r_mbytes_per_sec": 0, 00:14:43.115 "w_mbytes_per_sec": 0 00:14:43.115 }, 00:14:43.115 "claimed": false, 00:14:43.115 "zoned": false, 00:14:43.115 "supported_io_types": { 00:14:43.115 "read": true, 00:14:43.115 "write": true, 00:14:43.115 "unmap": true, 00:14:43.115 "flush": 
true, 00:14:43.115 "reset": true, 00:14:43.115 "nvme_admin": false, 00:14:43.115 "nvme_io": false, 00:14:43.115 "nvme_io_md": false, 00:14:43.115 "write_zeroes": true, 00:14:43.115 "zcopy": true, 00:14:43.115 "get_zone_info": false, 00:14:43.115 "zone_management": false, 00:14:43.115 "zone_append": false, 00:14:43.115 "compare": false, 00:14:43.115 "compare_and_write": false, 00:14:43.115 "abort": true, 00:14:43.115 "seek_hole": false, 00:14:43.115 "seek_data": false, 00:14:43.115 "copy": true, 00:14:43.115 "nvme_iov_md": false 00:14:43.115 }, 00:14:43.115 "memory_domains": [ 00:14:43.115 { 00:14:43.115 "dma_device_id": "system", 00:14:43.115 "dma_device_type": 1 00:14:43.115 }, 00:14:43.115 { 00:14:43.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.115 "dma_device_type": 2 00:14:43.115 } 00:14:43.115 ], 00:14:43.115 "driver_specific": {} 00:14:43.115 } 00:14:43.115 ] 00:14:43.115 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:43.115 18:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:43.115 18:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:43.115 18:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.374 BaseBdev3 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.374 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.632 18:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:43.632 [ 00:14:43.632 { 00:14:43.632 "name": "BaseBdev3", 00:14:43.632 "aliases": [ 00:14:43.632 "ec688d4b-cb75-4760-bad7-744aa263d9aa" 00:14:43.632 ], 00:14:43.632 "product_name": "Malloc disk", 00:14:43.632 "block_size": 512, 00:14:43.632 "num_blocks": 65536, 00:14:43.632 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:43.632 "assigned_rate_limits": { 00:14:43.632 "rw_ios_per_sec": 0, 00:14:43.632 "rw_mbytes_per_sec": 0, 00:14:43.632 "r_mbytes_per_sec": 0, 00:14:43.632 "w_mbytes_per_sec": 0 00:14:43.632 }, 00:14:43.632 "claimed": false, 00:14:43.632 "zoned": false, 00:14:43.632 "supported_io_types": { 00:14:43.632 "read": true, 00:14:43.632 "write": true, 00:14:43.632 "unmap": true, 00:14:43.632 "flush": true, 00:14:43.632 "reset": true, 00:14:43.632 "nvme_admin": false, 00:14:43.632 "nvme_io": false, 00:14:43.632 "nvme_io_md": false, 00:14:43.632 "write_zeroes": true, 00:14:43.632 "zcopy": true, 00:14:43.632 "get_zone_info": false, 00:14:43.632 "zone_management": false, 00:14:43.632 "zone_append": false, 00:14:43.632 "compare": false, 00:14:43.632 "compare_and_write": false, 00:14:43.632 "abort": true, 00:14:43.632 "seek_hole": false, 00:14:43.632 "seek_data": false, 00:14:43.632 "copy": true, 00:14:43.632 "nvme_iov_md": false 00:14:43.632 }, 00:14:43.632 "memory_domains": [ 00:14:43.632 { 00:14:43.632 "dma_device_id": "system", 00:14:43.632 "dma_device_type": 1 
00:14:43.632 }, 00:14:43.632 { 00:14:43.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.632 "dma_device_type": 2 00:14:43.632 } 00:14:43.632 ], 00:14:43.632 "driver_specific": {} 00:14:43.632 } 00:14:43.632 ] 00:14:43.891 18:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:43.891 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:43.891 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:43.891 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:43.891 [2024-07-15 18:29:29.428153] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.891 [2024-07-15 18:29:29.428191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.891 [2024-07-15 18:29:29.428209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.891 [2024-07-15 18:29:29.429589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.149 "name": "Existed_Raid", 00:14:44.149 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:44.149 "strip_size_kb": 64, 00:14:44.149 "state": "configuring", 00:14:44.149 "raid_level": "concat", 00:14:44.149 "superblock": true, 00:14:44.149 "num_base_bdevs": 3, 00:14:44.149 "num_base_bdevs_discovered": 2, 00:14:44.149 "num_base_bdevs_operational": 3, 00:14:44.149 "base_bdevs_list": [ 00:14:44.149 { 00:14:44.149 "name": "BaseBdev1", 00:14:44.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.149 "is_configured": false, 00:14:44.149 "data_offset": 0, 00:14:44.149 "data_size": 0 00:14:44.149 }, 00:14:44.149 { 00:14:44.149 "name": "BaseBdev2", 00:14:44.149 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:44.149 "is_configured": true, 00:14:44.149 "data_offset": 2048, 00:14:44.149 "data_size": 63488 00:14:44.149 }, 00:14:44.149 { 00:14:44.149 "name": "BaseBdev3", 00:14:44.149 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:44.149 "is_configured": true, 00:14:44.149 "data_offset": 2048, 00:14:44.149 
"data_size": 63488 00:14:44.149 } 00:14:44.149 ] 00:14:44.149 }' 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.149 18:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:45.084 [2024-07-15 18:29:30.595260] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.084 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.085 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.085 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.085 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:45.085 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.342 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.342 "name": "Existed_Raid", 00:14:45.342 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:45.342 "strip_size_kb": 64, 00:14:45.342 "state": "configuring", 00:14:45.342 "raid_level": "concat", 00:14:45.342 "superblock": true, 00:14:45.342 "num_base_bdevs": 3, 00:14:45.342 "num_base_bdevs_discovered": 1, 00:14:45.342 "num_base_bdevs_operational": 3, 00:14:45.342 "base_bdevs_list": [ 00:14:45.342 { 00:14:45.342 "name": "BaseBdev1", 00:14:45.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.342 "is_configured": false, 00:14:45.342 "data_offset": 0, 00:14:45.342 "data_size": 0 00:14:45.342 }, 00:14:45.342 { 00:14:45.342 "name": null, 00:14:45.342 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:45.342 "is_configured": false, 00:14:45.342 "data_offset": 2048, 00:14:45.342 "data_size": 63488 00:14:45.342 }, 00:14:45.342 { 00:14:45.342 "name": "BaseBdev3", 00:14:45.342 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:45.342 "is_configured": true, 00:14:45.342 "data_offset": 2048, 00:14:45.342 "data_size": 63488 00:14:45.342 } 00:14:45.342 ] 00:14:45.342 }' 00:14:45.342 18:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.342 18:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.277 18:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.277 18:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.277 18:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:14:46.277 18:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.538 [2024-07-15 18:29:32.006324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.538 BaseBdev1 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.538 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.796 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.055 [ 00:14:47.055 { 00:14:47.055 "name": "BaseBdev1", 00:14:47.055 "aliases": [ 00:14:47.055 "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0" 00:14:47.055 ], 00:14:47.055 "product_name": "Malloc disk", 00:14:47.055 "block_size": 512, 00:14:47.055 "num_blocks": 65536, 00:14:47.055 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:47.055 "assigned_rate_limits": { 00:14:47.055 "rw_ios_per_sec": 0, 00:14:47.055 "rw_mbytes_per_sec": 0, 00:14:47.055 "r_mbytes_per_sec": 0, 00:14:47.055 
"w_mbytes_per_sec": 0 00:14:47.055 }, 00:14:47.055 "claimed": true, 00:14:47.055 "claim_type": "exclusive_write", 00:14:47.055 "zoned": false, 00:14:47.055 "supported_io_types": { 00:14:47.055 "read": true, 00:14:47.055 "write": true, 00:14:47.055 "unmap": true, 00:14:47.055 "flush": true, 00:14:47.055 "reset": true, 00:14:47.055 "nvme_admin": false, 00:14:47.055 "nvme_io": false, 00:14:47.055 "nvme_io_md": false, 00:14:47.055 "write_zeroes": true, 00:14:47.055 "zcopy": true, 00:14:47.055 "get_zone_info": false, 00:14:47.055 "zone_management": false, 00:14:47.055 "zone_append": false, 00:14:47.055 "compare": false, 00:14:47.055 "compare_and_write": false, 00:14:47.055 "abort": true, 00:14:47.055 "seek_hole": false, 00:14:47.055 "seek_data": false, 00:14:47.055 "copy": true, 00:14:47.055 "nvme_iov_md": false 00:14:47.055 }, 00:14:47.055 "memory_domains": [ 00:14:47.055 { 00:14:47.055 "dma_device_id": "system", 00:14:47.055 "dma_device_type": 1 00:14:47.055 }, 00:14:47.055 { 00:14:47.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.055 "dma_device_type": 2 00:14:47.055 } 00:14:47.055 ], 00:14:47.055 "driver_specific": {} 00:14:47.055 } 00:14:47.055 ] 00:14:47.055 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:47.055 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.056 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.314 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.314 "name": "Existed_Raid", 00:14:47.314 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:47.314 "strip_size_kb": 64, 00:14:47.314 "state": "configuring", 00:14:47.314 "raid_level": "concat", 00:14:47.314 "superblock": true, 00:14:47.314 "num_base_bdevs": 3, 00:14:47.314 "num_base_bdevs_discovered": 2, 00:14:47.314 "num_base_bdevs_operational": 3, 00:14:47.314 "base_bdevs_list": [ 00:14:47.314 { 00:14:47.314 "name": "BaseBdev1", 00:14:47.314 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:47.314 "is_configured": true, 00:14:47.314 "data_offset": 2048, 00:14:47.314 "data_size": 63488 00:14:47.314 }, 00:14:47.314 { 00:14:47.314 "name": null, 00:14:47.314 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:47.314 "is_configured": false, 00:14:47.314 "data_offset": 2048, 00:14:47.314 "data_size": 63488 00:14:47.314 }, 00:14:47.314 { 00:14:47.314 "name": "BaseBdev3", 00:14:47.314 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:47.314 "is_configured": true, 00:14:47.314 "data_offset": 2048, 00:14:47.314 "data_size": 63488 00:14:47.314 } 
00:14:47.314 ] 00:14:47.314 }' 00:14:47.314 18:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.314 18:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.894 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.155 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.155 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:48.155 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:48.413 [2024-07-15 18:29:33.939543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.413 
18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.413 18:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.670 18:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.670 "name": "Existed_Raid", 00:14:48.671 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:48.671 "strip_size_kb": 64, 00:14:48.671 "state": "configuring", 00:14:48.671 "raid_level": "concat", 00:14:48.671 "superblock": true, 00:14:48.671 "num_base_bdevs": 3, 00:14:48.671 "num_base_bdevs_discovered": 1, 00:14:48.671 "num_base_bdevs_operational": 3, 00:14:48.671 "base_bdevs_list": [ 00:14:48.671 { 00:14:48.671 "name": "BaseBdev1", 00:14:48.671 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:48.671 "is_configured": true, 00:14:48.671 "data_offset": 2048, 00:14:48.671 "data_size": 63488 00:14:48.671 }, 00:14:48.671 { 00:14:48.671 "name": null, 00:14:48.671 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:48.671 "is_configured": false, 00:14:48.671 "data_offset": 2048, 00:14:48.671 "data_size": 63488 00:14:48.671 }, 00:14:48.671 { 00:14:48.671 "name": null, 00:14:48.671 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:48.671 "is_configured": false, 00:14:48.671 "data_offset": 2048, 00:14:48.671 "data_size": 63488 00:14:48.671 } 00:14:48.671 ] 00:14:48.671 }' 00:14:48.671 18:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.671 18:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.605 18:29:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.605 18:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.605 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:49.605 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:49.864 [2024-07-15 18:29:35.331304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.864 18:29:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.864 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.121 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.121 "name": "Existed_Raid", 00:14:50.121 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:50.121 "strip_size_kb": 64, 00:14:50.121 "state": "configuring", 00:14:50.121 "raid_level": "concat", 00:14:50.121 "superblock": true, 00:14:50.121 "num_base_bdevs": 3, 00:14:50.121 "num_base_bdevs_discovered": 2, 00:14:50.121 "num_base_bdevs_operational": 3, 00:14:50.121 "base_bdevs_list": [ 00:14:50.121 { 00:14:50.121 "name": "BaseBdev1", 00:14:50.121 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:50.121 "is_configured": true, 00:14:50.121 "data_offset": 2048, 00:14:50.121 "data_size": 63488 00:14:50.121 }, 00:14:50.121 { 00:14:50.121 "name": null, 00:14:50.121 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:50.121 "is_configured": false, 00:14:50.121 "data_offset": 2048, 00:14:50.121 "data_size": 63488 00:14:50.121 }, 00:14:50.121 { 00:14:50.121 "name": "BaseBdev3", 00:14:50.121 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:50.121 "is_configured": true, 00:14:50.121 "data_offset": 2048, 00:14:50.121 "data_size": 63488 00:14:50.121 } 00:14:50.121 ] 00:14:50.121 }' 00:14:50.121 18:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.121 18:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.688 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.688 18:29:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.947 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:50.947 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:51.206 [2024-07-15 18:29:36.703017] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.206 18:29:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.464 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.464 "name": "Existed_Raid", 00:14:51.464 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:51.464 "strip_size_kb": 64, 00:14:51.464 "state": "configuring", 00:14:51.464 "raid_level": "concat", 00:14:51.464 "superblock": true, 00:14:51.464 "num_base_bdevs": 3, 00:14:51.464 "num_base_bdevs_discovered": 1, 00:14:51.464 "num_base_bdevs_operational": 3, 00:14:51.464 "base_bdevs_list": [ 00:14:51.465 { 00:14:51.465 "name": null, 00:14:51.465 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:51.465 "is_configured": false, 00:14:51.465 "data_offset": 2048, 00:14:51.465 "data_size": 63488 00:14:51.465 }, 00:14:51.465 { 00:14:51.465 "name": null, 00:14:51.465 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:51.465 "is_configured": false, 00:14:51.465 "data_offset": 2048, 00:14:51.465 "data_size": 63488 00:14:51.465 }, 00:14:51.465 { 00:14:51.465 "name": "BaseBdev3", 00:14:51.465 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:51.465 "is_configured": true, 00:14:51.465 "data_offset": 2048, 00:14:51.465 "data_size": 63488 00:14:51.465 } 00:14:51.465 ] 00:14:51.465 }' 00:14:51.465 18:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.465 18:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.402 18:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.402 18:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.402 18:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:52.402 18:29:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.661 [2024-07-15 18:29:38.097227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.661 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.920 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.920 "name": 
"Existed_Raid", 00:14:52.920 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:52.920 "strip_size_kb": 64, 00:14:52.920 "state": "configuring", 00:14:52.920 "raid_level": "concat", 00:14:52.920 "superblock": true, 00:14:52.920 "num_base_bdevs": 3, 00:14:52.920 "num_base_bdevs_discovered": 2, 00:14:52.920 "num_base_bdevs_operational": 3, 00:14:52.920 "base_bdevs_list": [ 00:14:52.920 { 00:14:52.920 "name": null, 00:14:52.920 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:52.920 "is_configured": false, 00:14:52.920 "data_offset": 2048, 00:14:52.920 "data_size": 63488 00:14:52.920 }, 00:14:52.920 { 00:14:52.920 "name": "BaseBdev2", 00:14:52.920 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:52.920 "is_configured": true, 00:14:52.920 "data_offset": 2048, 00:14:52.920 "data_size": 63488 00:14:52.920 }, 00:14:52.920 { 00:14:52.920 "name": "BaseBdev3", 00:14:52.920 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:52.920 "is_configured": true, 00:14:52.920 "data_offset": 2048, 00:14:52.920 "data_size": 63488 00:14:52.920 } 00:14:52.920 ] 00:14:52.920 }' 00:14:52.920 18:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.920 18:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.487 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.487 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:53.746 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:53.746 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:53.746 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.004 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0 00:14:54.263 [2024-07-15 18:29:39.752976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:54.263 [2024-07-15 18:29:39.753126] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2577a60 00:14:54.263 [2024-07-15 18:29:39.753138] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:54.263 [2024-07-15 18:29:39.753316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23bf2d0 00:14:54.263 [2024-07-15 18:29:39.753433] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2577a60 00:14:54.263 [2024-07-15 18:29:39.753441] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2577a60 00:14:54.263 [2024-07-15 18:29:39.753532] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.263 NewBaseBdev 00:14:54.263 18:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:54.263 18:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:54.263 18:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:54.263 18:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:54.263 18:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:54.263 18:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:54.263 18:29:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.522 18:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:54.782 [ 00:14:54.782 { 00:14:54.782 "name": "NewBaseBdev", 00:14:54.782 "aliases": [ 00:14:54.782 "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0" 00:14:54.782 ], 00:14:54.782 "product_name": "Malloc disk", 00:14:54.782 "block_size": 512, 00:14:54.782 "num_blocks": 65536, 00:14:54.782 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:54.782 "assigned_rate_limits": { 00:14:54.782 "rw_ios_per_sec": 0, 00:14:54.782 "rw_mbytes_per_sec": 0, 00:14:54.782 "r_mbytes_per_sec": 0, 00:14:54.782 "w_mbytes_per_sec": 0 00:14:54.782 }, 00:14:54.782 "claimed": true, 00:14:54.782 "claim_type": "exclusive_write", 00:14:54.782 "zoned": false, 00:14:54.782 "supported_io_types": { 00:14:54.782 "read": true, 00:14:54.782 "write": true, 00:14:54.782 "unmap": true, 00:14:54.782 "flush": true, 00:14:54.782 "reset": true, 00:14:54.782 "nvme_admin": false, 00:14:54.782 "nvme_io": false, 00:14:54.782 "nvme_io_md": false, 00:14:54.782 "write_zeroes": true, 00:14:54.782 "zcopy": true, 00:14:54.782 "get_zone_info": false, 00:14:54.782 "zone_management": false, 00:14:54.782 "zone_append": false, 00:14:54.782 "compare": false, 00:14:54.782 "compare_and_write": false, 00:14:54.782 "abort": true, 00:14:54.782 "seek_hole": false, 00:14:54.782 "seek_data": false, 00:14:54.782 "copy": true, 00:14:54.782 "nvme_iov_md": false 00:14:54.782 }, 00:14:54.782 "memory_domains": [ 00:14:54.782 { 00:14:54.782 "dma_device_id": "system", 00:14:54.782 "dma_device_type": 1 00:14:54.782 }, 00:14:54.782 { 00:14:54.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.782 "dma_device_type": 2 00:14:54.782 } 
00:14:54.782 ], 00:14:54.782 "driver_specific": {} 00:14:54.782 } 00:14:54.782 ] 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.782 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.041 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.041 "name": "Existed_Raid", 00:14:55.041 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:55.041 "strip_size_kb": 64, 00:14:55.041 "state": "online", 00:14:55.041 
"raid_level": "concat", 00:14:55.041 "superblock": true, 00:14:55.041 "num_base_bdevs": 3, 00:14:55.041 "num_base_bdevs_discovered": 3, 00:14:55.041 "num_base_bdevs_operational": 3, 00:14:55.041 "base_bdevs_list": [ 00:14:55.041 { 00:14:55.041 "name": "NewBaseBdev", 00:14:55.041 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:55.041 "is_configured": true, 00:14:55.041 "data_offset": 2048, 00:14:55.041 "data_size": 63488 00:14:55.041 }, 00:14:55.041 { 00:14:55.041 "name": "BaseBdev2", 00:14:55.041 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:55.041 "is_configured": true, 00:14:55.041 "data_offset": 2048, 00:14:55.041 "data_size": 63488 00:14:55.041 }, 00:14:55.041 { 00:14:55.041 "name": "BaseBdev3", 00:14:55.041 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:55.041 "is_configured": true, 00:14:55.041 "data_offset": 2048, 00:14:55.041 "data_size": 63488 00:14:55.041 } 00:14:55.041 ] 00:14:55.041 }' 00:14:55.041 18:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.041 18:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:55.671 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:55.930 [2024-07-15 18:29:41.385693] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.930 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:55.930 "name": "Existed_Raid", 00:14:55.930 "aliases": [ 00:14:55.930 "4c307119-4130-413d-9ae1-a7155dcb9a67" 00:14:55.930 ], 00:14:55.930 "product_name": "Raid Volume", 00:14:55.930 "block_size": 512, 00:14:55.930 "num_blocks": 190464, 00:14:55.930 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:55.930 "assigned_rate_limits": { 00:14:55.930 "rw_ios_per_sec": 0, 00:14:55.930 "rw_mbytes_per_sec": 0, 00:14:55.930 "r_mbytes_per_sec": 0, 00:14:55.930 "w_mbytes_per_sec": 0 00:14:55.930 }, 00:14:55.930 "claimed": false, 00:14:55.930 "zoned": false, 00:14:55.930 "supported_io_types": { 00:14:55.930 "read": true, 00:14:55.930 "write": true, 00:14:55.930 "unmap": true, 00:14:55.930 "flush": true, 00:14:55.930 "reset": true, 00:14:55.930 "nvme_admin": false, 00:14:55.930 "nvme_io": false, 00:14:55.930 "nvme_io_md": false, 00:14:55.930 "write_zeroes": true, 00:14:55.930 "zcopy": false, 00:14:55.930 "get_zone_info": false, 00:14:55.930 "zone_management": false, 00:14:55.930 "zone_append": false, 00:14:55.930 "compare": false, 00:14:55.930 "compare_and_write": false, 00:14:55.930 "abort": false, 00:14:55.930 "seek_hole": false, 00:14:55.930 "seek_data": false, 00:14:55.930 "copy": false, 00:14:55.930 "nvme_iov_md": false 00:14:55.930 }, 00:14:55.930 "memory_domains": [ 00:14:55.930 { 00:14:55.930 "dma_device_id": "system", 00:14:55.930 "dma_device_type": 1 00:14:55.930 }, 00:14:55.930 { 00:14:55.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.930 "dma_device_type": 2 00:14:55.930 }, 00:14:55.930 { 00:14:55.930 "dma_device_id": "system", 00:14:55.930 "dma_device_type": 1 00:14:55.930 
}, 00:14:55.930 { 00:14:55.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.930 "dma_device_type": 2 00:14:55.930 }, 00:14:55.930 { 00:14:55.930 "dma_device_id": "system", 00:14:55.930 "dma_device_type": 1 00:14:55.930 }, 00:14:55.930 { 00:14:55.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.930 "dma_device_type": 2 00:14:55.930 } 00:14:55.930 ], 00:14:55.930 "driver_specific": { 00:14:55.930 "raid": { 00:14:55.930 "uuid": "4c307119-4130-413d-9ae1-a7155dcb9a67", 00:14:55.930 "strip_size_kb": 64, 00:14:55.930 "state": "online", 00:14:55.930 "raid_level": "concat", 00:14:55.930 "superblock": true, 00:14:55.930 "num_base_bdevs": 3, 00:14:55.930 "num_base_bdevs_discovered": 3, 00:14:55.930 "num_base_bdevs_operational": 3, 00:14:55.930 "base_bdevs_list": [ 00:14:55.930 { 00:14:55.930 "name": "NewBaseBdev", 00:14:55.930 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:55.930 "is_configured": true, 00:14:55.930 "data_offset": 2048, 00:14:55.930 "data_size": 63488 00:14:55.930 }, 00:14:55.930 { 00:14:55.930 "name": "BaseBdev2", 00:14:55.930 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:55.930 "is_configured": true, 00:14:55.930 "data_offset": 2048, 00:14:55.930 "data_size": 63488 00:14:55.930 }, 00:14:55.930 { 00:14:55.930 "name": "BaseBdev3", 00:14:55.930 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:55.930 "is_configured": true, 00:14:55.930 "data_offset": 2048, 00:14:55.930 "data_size": 63488 00:14:55.930 } 00:14:55.930 ] 00:14:55.930 } 00:14:55.930 } 00:14:55.930 }' 00:14:55.930 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.930 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:55.930 BaseBdev2 00:14:55.930 BaseBdev3' 00:14:55.930 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:55.930 
18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:55.930 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.189 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.189 "name": "NewBaseBdev", 00:14:56.189 "aliases": [ 00:14:56.189 "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0" 00:14:56.189 ], 00:14:56.189 "product_name": "Malloc disk", 00:14:56.189 "block_size": 512, 00:14:56.189 "num_blocks": 65536, 00:14:56.189 "uuid": "0a4f3e9b-b802-4e71-9008-17dd0fb0f9a0", 00:14:56.189 "assigned_rate_limits": { 00:14:56.189 "rw_ios_per_sec": 0, 00:14:56.189 "rw_mbytes_per_sec": 0, 00:14:56.189 "r_mbytes_per_sec": 0, 00:14:56.189 "w_mbytes_per_sec": 0 00:14:56.189 }, 00:14:56.189 "claimed": true, 00:14:56.189 "claim_type": "exclusive_write", 00:14:56.189 "zoned": false, 00:14:56.189 "supported_io_types": { 00:14:56.189 "read": true, 00:14:56.189 "write": true, 00:14:56.189 "unmap": true, 00:14:56.189 "flush": true, 00:14:56.189 "reset": true, 00:14:56.189 "nvme_admin": false, 00:14:56.189 "nvme_io": false, 00:14:56.189 "nvme_io_md": false, 00:14:56.189 "write_zeroes": true, 00:14:56.189 "zcopy": true, 00:14:56.189 "get_zone_info": false, 00:14:56.189 "zone_management": false, 00:14:56.189 "zone_append": false, 00:14:56.189 "compare": false, 00:14:56.189 "compare_and_write": false, 00:14:56.189 "abort": true, 00:14:56.189 "seek_hole": false, 00:14:56.189 "seek_data": false, 00:14:56.189 "copy": true, 00:14:56.189 "nvme_iov_md": false 00:14:56.189 }, 00:14:56.189 "memory_domains": [ 00:14:56.189 { 00:14:56.189 "dma_device_id": "system", 00:14:56.189 "dma_device_type": 1 00:14:56.189 }, 00:14:56.189 { 00:14:56.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.189 "dma_device_type": 2 00:14:56.189 } 00:14:56.189 ], 00:14:56.189 
"driver_specific": {} 00:14:56.189 }' 00:14:56.189 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.448 18:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:56.706 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.966 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.966 "name": "BaseBdev2", 00:14:56.966 "aliases": [ 00:14:56.966 "1b376f34-bb89-4500-91b4-ebb100200c41" 00:14:56.966 ], 00:14:56.966 "product_name": 
"Malloc disk", 00:14:56.966 "block_size": 512, 00:14:56.966 "num_blocks": 65536, 00:14:56.966 "uuid": "1b376f34-bb89-4500-91b4-ebb100200c41", 00:14:56.966 "assigned_rate_limits": { 00:14:56.966 "rw_ios_per_sec": 0, 00:14:56.966 "rw_mbytes_per_sec": 0, 00:14:56.966 "r_mbytes_per_sec": 0, 00:14:56.966 "w_mbytes_per_sec": 0 00:14:56.966 }, 00:14:56.966 "claimed": true, 00:14:56.966 "claim_type": "exclusive_write", 00:14:56.966 "zoned": false, 00:14:56.966 "supported_io_types": { 00:14:56.966 "read": true, 00:14:56.966 "write": true, 00:14:56.966 "unmap": true, 00:14:56.966 "flush": true, 00:14:56.966 "reset": true, 00:14:56.966 "nvme_admin": false, 00:14:56.966 "nvme_io": false, 00:14:56.966 "nvme_io_md": false, 00:14:56.966 "write_zeroes": true, 00:14:56.966 "zcopy": true, 00:14:56.966 "get_zone_info": false, 00:14:56.966 "zone_management": false, 00:14:56.966 "zone_append": false, 00:14:56.966 "compare": false, 00:14:56.966 "compare_and_write": false, 00:14:56.966 "abort": true, 00:14:56.966 "seek_hole": false, 00:14:56.966 "seek_data": false, 00:14:56.966 "copy": true, 00:14:56.966 "nvme_iov_md": false 00:14:56.966 }, 00:14:56.966 "memory_domains": [ 00:14:56.966 { 00:14:56.966 "dma_device_id": "system", 00:14:56.966 "dma_device_type": 1 00:14:56.966 }, 00:14:56.966 { 00:14:56.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.966 "dma_device_type": 2 00:14:56.966 } 00:14:56.966 ], 00:14:56.966 "driver_specific": {} 00:14:56.966 }' 00:14:56.966 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.966 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.966 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.966 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.966 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.224 
18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.224 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.224 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.224 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.225 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.225 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.225 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.225 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.225 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:57.225 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.483 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.483 "name": "BaseBdev3", 00:14:57.483 "aliases": [ 00:14:57.483 "ec688d4b-cb75-4760-bad7-744aa263d9aa" 00:14:57.483 ], 00:14:57.483 "product_name": "Malloc disk", 00:14:57.483 "block_size": 512, 00:14:57.483 "num_blocks": 65536, 00:14:57.483 "uuid": "ec688d4b-cb75-4760-bad7-744aa263d9aa", 00:14:57.483 "assigned_rate_limits": { 00:14:57.483 "rw_ios_per_sec": 0, 00:14:57.483 "rw_mbytes_per_sec": 0, 00:14:57.483 "r_mbytes_per_sec": 0, 00:14:57.483 "w_mbytes_per_sec": 0 00:14:57.483 }, 00:14:57.483 "claimed": true, 00:14:57.483 "claim_type": "exclusive_write", 00:14:57.483 "zoned": false, 00:14:57.483 "supported_io_types": { 00:14:57.483 "read": true, 00:14:57.483 "write": true, 00:14:57.483 "unmap": true, 
00:14:57.483 "flush": true, 00:14:57.483 "reset": true, 00:14:57.483 "nvme_admin": false, 00:14:57.483 "nvme_io": false, 00:14:57.483 "nvme_io_md": false, 00:14:57.483 "write_zeroes": true, 00:14:57.483 "zcopy": true, 00:14:57.483 "get_zone_info": false, 00:14:57.483 "zone_management": false, 00:14:57.483 "zone_append": false, 00:14:57.483 "compare": false, 00:14:57.483 "compare_and_write": false, 00:14:57.483 "abort": true, 00:14:57.483 "seek_hole": false, 00:14:57.483 "seek_data": false, 00:14:57.483 "copy": true, 00:14:57.483 "nvme_iov_md": false 00:14:57.483 }, 00:14:57.483 "memory_domains": [ 00:14:57.483 { 00:14:57.483 "dma_device_id": "system", 00:14:57.483 "dma_device_type": 1 00:14:57.483 }, 00:14:57.483 { 00:14:57.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.483 "dma_device_type": 2 00:14:57.483 } 00:14:57.483 ], 00:14:57.483 "driver_specific": {} 00:14:57.483 }' 00:14:57.483 18:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.483 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.747 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.747 18:29:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.005 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.005 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.264 [2024-07-15 18:29:43.559304] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.264 [2024-07-15 18:29:43.559329] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.264 [2024-07-15 18:29:43.559380] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.264 [2024-07-15 18:29:43.559428] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.264 [2024-07-15 18:29:43.559437] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2577a60 name Existed_Raid, state offline 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2803348 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2803348 ']' 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2803348 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2803348 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2803348' 00:14:58.264 killing process with pid 2803348 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2803348 00:14:58.264 [2024-07-15 18:29:43.621334] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.264 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2803348 00:14:58.264 [2024-07-15 18:29:43.646553] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.523 18:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:58.523 00:14:58.523 real 0m29.542s 00:14:58.523 user 0m56.089s 00:14:58.523 sys 0m4.037s 00:14:58.523 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.523 18:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 ************************************ 00:14:58.523 END TEST raid_state_function_test_sb 00:14:58.523 ************************************ 00:14:58.523 18:29:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:58.523 18:29:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:58.523 18:29:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:58.523 18:29:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.523 18:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 ************************************ 00:14:58.523 START TEST raid_superblock_test 00:14:58.523 ************************************ 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # 
local raid_level=concat 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2808433 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2808433 /var/tmp/spdk-raid.sock 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2808433 ']' 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:58.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.523 18:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 [2024-07-15 18:29:43.976572] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:58.523 [2024-07-15 18:29:43.976680] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808433 ] 00:14:58.781 [2024-07-15 18:29:44.114784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.781 [2024-07-15 18:29:44.209521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.781 [2024-07-15 18:29:44.278369] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.782 [2024-07-15 18:29:44.278398] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.349 18:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:59.608 malloc1 00:14:59.608 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.867 [2024-07-15 18:29:45.392382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.867 [2024-07-15 18:29:45.392426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.867 [2024-07-15 18:29:45.392442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d8ee20 00:14:59.867 [2024-07-15 18:29:45.392453] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.867 [2024-07-15 18:29:45.394237] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.867 [2024-07-15 18:29:45.394264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.867 pt1 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.867 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.867 18:29:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:00.126 malloc2 00:15:00.126 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.385 [2024-07-15 18:29:45.898531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.385 [2024-07-15 18:29:45.898575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.385 [2024-07-15 18:29:45.898589] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f38ed0 00:15:00.385 [2024-07-15 18:29:45.898599] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.385 [2024-07-15 18:29:45.900222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.385 [2024-07-15 18:29:45.900248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.385 pt2 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.385 18:29:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.385 18:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:00.644 malloc3 00:15:00.644 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.903 [2024-07-15 18:29:46.404484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.903 [2024-07-15 18:29:46.404528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.903 [2024-07-15 18:29:46.404543] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3ca30 00:15:00.903 [2024-07-15 18:29:46.404553] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.903 [2024-07-15 18:29:46.406182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.903 [2024-07-15 18:29:46.406209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.903 pt3 00:15:00.903 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:00.903 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:00.903 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:01.162 [2024-07-15 18:29:46.657176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.162 [2024-07-15 18:29:46.658537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
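The loop above (lines 415–425 of `bdev_raid.sh`) builds three base bdevs, each a `malloc` bdev wrapped by a passthru bdev with a fixed-pattern UUID, before handing them to `bdev_raid_create`. A sketch of the `rpc.py` argument sequence that loop issues, reconstructed from the commands visible in the log (the helper name and its defaults are illustrative, not part of the test script):

```python
def base_bdev_cmds(num_base_bdevs=3, size_mb=32, block_size=512):
    # For each base bdev i: create malloc{i} (32 MiB, 512 B blocks),
    # then a passthru bdev pt{i} claiming it, with UUID
    # 00000000-0000-0000-0000-00000000000{i} as seen in the log.
    cmds = []
    for i in range(1, num_base_bdevs + 1):
        uuid = f"00000000-0000-0000-0000-{i:012d}"
        cmds.append(["bdev_malloc_create", str(size_mb), str(block_size),
                     "-b", f"malloc{i}"])
        cmds.append(["bdev_passthru_create", "-b", f"malloc{i}",
                     "-p", f"pt{i}", "-u", uuid])
    return cmds

cmds = base_bdev_cmds()
for c in cmds:
    print(" ".join(c))
```

Each printed line corresponds to one `rpc.py -s /var/tmp/spdk-raid.sock ...` invocation in the transcript.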
00:15:01.162 [2024-07-15 18:29:46.658594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.162 [2024-07-15 18:29:46.658747] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f3da40 00:15:01.162 [2024-07-15 18:29:46.658757] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:01.162 [2024-07-15 18:29:46.658970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f38050 00:15:01.162 [2024-07-15 18:29:46.659116] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f3da40 00:15:01.162 [2024-07-15 18:29:46.659125] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f3da40 00:15:01.162 [2024-07-15 18:29:46.659224] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.162 
18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.162 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.421 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.421 "name": "raid_bdev1", 00:15:01.421 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:01.421 "strip_size_kb": 64, 00:15:01.421 "state": "online", 00:15:01.421 "raid_level": "concat", 00:15:01.421 "superblock": true, 00:15:01.421 "num_base_bdevs": 3, 00:15:01.421 "num_base_bdevs_discovered": 3, 00:15:01.421 "num_base_bdevs_operational": 3, 00:15:01.421 "base_bdevs_list": [ 00:15:01.421 { 00:15:01.421 "name": "pt1", 00:15:01.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.421 "is_configured": true, 00:15:01.421 "data_offset": 2048, 00:15:01.421 "data_size": 63488 00:15:01.421 }, 00:15:01.421 { 00:15:01.421 "name": "pt2", 00:15:01.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.421 "is_configured": true, 00:15:01.421 "data_offset": 2048, 00:15:01.421 "data_size": 63488 00:15:01.421 }, 00:15:01.421 { 00:15:01.421 "name": "pt3", 00:15:01.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.421 "is_configured": true, 00:15:01.421 "data_offset": 2048, 00:15:01.421 "data_size": 63488 00:15:01.421 } 00:15:01.421 ] 00:15:01.421 }' 00:15:01.421 18:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.421 18:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:02.357 [2024-07-15 18:29:47.780458] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.357 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:02.357 "name": "raid_bdev1", 00:15:02.357 "aliases": [ 00:15:02.357 "d5e69071-837b-4bba-94eb-633b7e785840" 00:15:02.357 ], 00:15:02.357 "product_name": "Raid Volume", 00:15:02.357 "block_size": 512, 00:15:02.357 "num_blocks": 190464, 00:15:02.357 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:02.357 "assigned_rate_limits": { 00:15:02.357 "rw_ios_per_sec": 0, 00:15:02.357 "rw_mbytes_per_sec": 0, 00:15:02.357 "r_mbytes_per_sec": 0, 00:15:02.357 "w_mbytes_per_sec": 0 00:15:02.357 }, 00:15:02.357 "claimed": false, 00:15:02.357 "zoned": false, 00:15:02.357 "supported_io_types": { 00:15:02.357 "read": true, 00:15:02.357 "write": true, 00:15:02.357 "unmap": true, 00:15:02.357 "flush": true, 00:15:02.357 "reset": true, 00:15:02.357 "nvme_admin": false, 00:15:02.357 "nvme_io": false, 00:15:02.357 "nvme_io_md": false, 00:15:02.357 "write_zeroes": true, 00:15:02.357 "zcopy": false, 00:15:02.357 "get_zone_info": false, 00:15:02.357 "zone_management": false, 00:15:02.357 "zone_append": false, 00:15:02.357 "compare": false, 00:15:02.357 "compare_and_write": false, 00:15:02.357 "abort": false, 00:15:02.357 
"seek_hole": false, 00:15:02.357 "seek_data": false, 00:15:02.357 "copy": false, 00:15:02.357 "nvme_iov_md": false 00:15:02.357 }, 00:15:02.357 "memory_domains": [ 00:15:02.357 { 00:15:02.357 "dma_device_id": "system", 00:15:02.357 "dma_device_type": 1 00:15:02.357 }, 00:15:02.357 { 00:15:02.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.357 "dma_device_type": 2 00:15:02.357 }, 00:15:02.357 { 00:15:02.357 "dma_device_id": "system", 00:15:02.357 "dma_device_type": 1 00:15:02.357 }, 00:15:02.357 { 00:15:02.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.358 "dma_device_type": 2 00:15:02.358 }, 00:15:02.358 { 00:15:02.358 "dma_device_id": "system", 00:15:02.358 "dma_device_type": 1 00:15:02.358 }, 00:15:02.358 { 00:15:02.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.358 "dma_device_type": 2 00:15:02.358 } 00:15:02.358 ], 00:15:02.358 "driver_specific": { 00:15:02.358 "raid": { 00:15:02.358 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:02.358 "strip_size_kb": 64, 00:15:02.358 "state": "online", 00:15:02.358 "raid_level": "concat", 00:15:02.358 "superblock": true, 00:15:02.358 "num_base_bdevs": 3, 00:15:02.358 "num_base_bdevs_discovered": 3, 00:15:02.358 "num_base_bdevs_operational": 3, 00:15:02.358 "base_bdevs_list": [ 00:15:02.358 { 00:15:02.358 "name": "pt1", 00:15:02.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.358 "is_configured": true, 00:15:02.358 "data_offset": 2048, 00:15:02.358 "data_size": 63488 00:15:02.358 }, 00:15:02.358 { 00:15:02.358 "name": "pt2", 00:15:02.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.358 "is_configured": true, 00:15:02.358 "data_offset": 2048, 00:15:02.358 "data_size": 63488 00:15:02.358 }, 00:15:02.358 { 00:15:02.358 "name": "pt3", 00:15:02.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.358 "is_configured": true, 00:15:02.358 "data_offset": 2048, 00:15:02.358 "data_size": 63488 00:15:02.358 } 00:15:02.358 ] 00:15:02.358 } 00:15:02.358 } 00:15:02.358 }' 
00:15:02.358 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.358 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:02.358 pt2 00:15:02.358 pt3' 00:15:02.358 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.358 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:02.358 18:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.617 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.617 "name": "pt1", 00:15:02.617 "aliases": [ 00:15:02.617 "00000000-0000-0000-0000-000000000001" 00:15:02.617 ], 00:15:02.617 "product_name": "passthru", 00:15:02.617 "block_size": 512, 00:15:02.617 "num_blocks": 65536, 00:15:02.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.617 "assigned_rate_limits": { 00:15:02.617 "rw_ios_per_sec": 0, 00:15:02.617 "rw_mbytes_per_sec": 0, 00:15:02.617 "r_mbytes_per_sec": 0, 00:15:02.617 "w_mbytes_per_sec": 0 00:15:02.617 }, 00:15:02.617 "claimed": true, 00:15:02.617 "claim_type": "exclusive_write", 00:15:02.617 "zoned": false, 00:15:02.617 "supported_io_types": { 00:15:02.617 "read": true, 00:15:02.617 "write": true, 00:15:02.617 "unmap": true, 00:15:02.617 "flush": true, 00:15:02.617 "reset": true, 00:15:02.617 "nvme_admin": false, 00:15:02.617 "nvme_io": false, 00:15:02.617 "nvme_io_md": false, 00:15:02.617 "write_zeroes": true, 00:15:02.617 "zcopy": true, 00:15:02.617 "get_zone_info": false, 00:15:02.617 "zone_management": false, 00:15:02.617 "zone_append": false, 00:15:02.617 "compare": false, 00:15:02.617 "compare_and_write": false, 00:15:02.617 "abort": true, 00:15:02.617 "seek_hole": false, 00:15:02.617 
"seek_data": false, 00:15:02.617 "copy": true, 00:15:02.617 "nvme_iov_md": false 00:15:02.617 }, 00:15:02.617 "memory_domains": [ 00:15:02.617 { 00:15:02.617 "dma_device_id": "system", 00:15:02.617 "dma_device_type": 1 00:15:02.617 }, 00:15:02.617 { 00:15:02.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.617 "dma_device_type": 2 00:15:02.617 } 00:15:02.617 ], 00:15:02.617 "driver_specific": { 00:15:02.617 "passthru": { 00:15:02.617 "name": "pt1", 00:15:02.617 "base_bdev_name": "malloc1" 00:15:02.617 } 00:15:02.617 } 00:15:02.617 }' 00:15:02.617 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.617 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.876 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.135 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.135 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.135 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.135 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:03.135 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.393 "name": "pt2", 00:15:03.393 "aliases": [ 00:15:03.393 "00000000-0000-0000-0000-000000000002" 00:15:03.393 ], 00:15:03.393 "product_name": "passthru", 00:15:03.393 "block_size": 512, 00:15:03.393 "num_blocks": 65536, 00:15:03.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.393 "assigned_rate_limits": { 00:15:03.393 "rw_ios_per_sec": 0, 00:15:03.393 "rw_mbytes_per_sec": 0, 00:15:03.393 "r_mbytes_per_sec": 0, 00:15:03.393 "w_mbytes_per_sec": 0 00:15:03.393 }, 00:15:03.393 "claimed": true, 00:15:03.393 "claim_type": "exclusive_write", 00:15:03.393 "zoned": false, 00:15:03.393 "supported_io_types": { 00:15:03.393 "read": true, 00:15:03.393 "write": true, 00:15:03.393 "unmap": true, 00:15:03.393 "flush": true, 00:15:03.393 "reset": true, 00:15:03.393 "nvme_admin": false, 00:15:03.393 "nvme_io": false, 00:15:03.393 "nvme_io_md": false, 00:15:03.393 "write_zeroes": true, 00:15:03.393 "zcopy": true, 00:15:03.393 "get_zone_info": false, 00:15:03.393 "zone_management": false, 00:15:03.393 "zone_append": false, 00:15:03.393 "compare": false, 00:15:03.393 "compare_and_write": false, 00:15:03.393 "abort": true, 00:15:03.393 "seek_hole": false, 00:15:03.393 "seek_data": false, 00:15:03.393 "copy": true, 00:15:03.393 "nvme_iov_md": false 00:15:03.393 }, 00:15:03.393 "memory_domains": [ 00:15:03.393 { 00:15:03.393 "dma_device_id": "system", 00:15:03.393 "dma_device_type": 1 00:15:03.393 }, 00:15:03.393 { 00:15:03.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.393 "dma_device_type": 2 00:15:03.393 } 00:15:03.393 ], 00:15:03.393 "driver_specific": { 00:15:03.393 "passthru": { 00:15:03.393 "name": "pt2", 00:15:03.393 "base_bdev_name": "malloc2" 00:15:03.393 } 00:15:03.393 } 00:15:03.393 }' 00:15:03.393 18:29:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.393 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.651 18:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:03.651 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.909 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.909 "name": "pt3", 00:15:03.909 "aliases": [ 00:15:03.909 "00000000-0000-0000-0000-000000000003" 00:15:03.909 ], 00:15:03.909 "product_name": "passthru", 00:15:03.910 "block_size": 512, 00:15:03.910 "num_blocks": 65536, 00:15:03.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.910 "assigned_rate_limits": { 
00:15:03.910 "rw_ios_per_sec": 0, 00:15:03.910 "rw_mbytes_per_sec": 0, 00:15:03.910 "r_mbytes_per_sec": 0, 00:15:03.910 "w_mbytes_per_sec": 0 00:15:03.910 }, 00:15:03.910 "claimed": true, 00:15:03.910 "claim_type": "exclusive_write", 00:15:03.910 "zoned": false, 00:15:03.910 "supported_io_types": { 00:15:03.910 "read": true, 00:15:03.910 "write": true, 00:15:03.910 "unmap": true, 00:15:03.910 "flush": true, 00:15:03.910 "reset": true, 00:15:03.910 "nvme_admin": false, 00:15:03.910 "nvme_io": false, 00:15:03.910 "nvme_io_md": false, 00:15:03.910 "write_zeroes": true, 00:15:03.910 "zcopy": true, 00:15:03.910 "get_zone_info": false, 00:15:03.910 "zone_management": false, 00:15:03.910 "zone_append": false, 00:15:03.910 "compare": false, 00:15:03.910 "compare_and_write": false, 00:15:03.910 "abort": true, 00:15:03.910 "seek_hole": false, 00:15:03.910 "seek_data": false, 00:15:03.910 "copy": true, 00:15:03.910 "nvme_iov_md": false 00:15:03.910 }, 00:15:03.910 "memory_domains": [ 00:15:03.910 { 00:15:03.910 "dma_device_id": "system", 00:15:03.910 "dma_device_type": 1 00:15:03.910 }, 00:15:03.910 { 00:15:03.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.910 "dma_device_type": 2 00:15:03.910 } 00:15:03.910 ], 00:15:03.910 "driver_specific": { 00:15:03.910 "passthru": { 00:15:03.910 "name": "pt3", 00:15:03.910 "base_bdev_name": "malloc3" 00:15:03.910 } 00:15:03.910 } 00:15:03.910 }' 00:15:03.910 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.910 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.910 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.910 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.169 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:04.427 [2024-07-15 18:29:49.954308] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.427 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d5e69071-837b-4bba-94eb-633b7e785840 00:15:04.427 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d5e69071-837b-4bba-94eb-633b7e785840 ']' 00:15:04.427 18:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:04.686 [2024-07-15 18:29:50.214701] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.686 [2024-07-15 18:29:50.214724] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.686 [2024-07-15 18:29:50.214771] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.686 [2024-07-15 18:29:50.214824] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:15:04.686 [2024-07-15 18:29:50.214832] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f3da40 name raid_bdev1, state offline 00:15:04.686 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.686 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:04.943 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:04.943 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:04.943 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.943 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:05.201 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.201 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:05.459 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.459 18:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:05.717 18:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:05.717 18:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' 
false == true ']' 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:05.975 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:06.234 [2024-07-15 18:29:51.730710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.234 [2024-07-15 18:29:51.732125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.234 [2024-07-15 18:29:51.732168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:06.234 [2024-07-15 18:29:51.732211] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:06.234 [2024-07-15 18:29:51.732245] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:06.234 [2024-07-15 18:29:51.732264] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:06.234 [2024-07-15 18:29:51.732278] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.234 [2024-07-15 18:29:51.732291] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f39100 name raid_bdev1, state configuring 00:15:06.234 request: 00:15:06.234 { 00:15:06.234 "name": "raid_bdev1", 00:15:06.234 "raid_level": "concat", 00:15:06.234 "base_bdevs": [ 00:15:06.234 "malloc1", 00:15:06.234 "malloc2", 00:15:06.234 "malloc3" 00:15:06.234 ], 00:15:06.234 "strip_size_kb": 64, 00:15:06.234 "superblock": false, 00:15:06.234 "method": "bdev_raid_create", 00:15:06.234 "req_id": 1 00:15:06.234 } 00:15:06.234 Got JSON-RPC error response 00:15:06.234 response: 00:15:06.234 { 00:15:06.234 "code": -17, 00:15:06.234 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.234 } 00:15:06.234 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:06.234 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 
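The `NOT ... bdev_raid_create` invocation above is the negative-path check: recreating `raid_bdev1` directly on `malloc1..malloc3` must fail, because those bdevs already carry the first raid bdev's superblock. The daemon rejects it with the JSON-RPC error shown in the log; a sketch of validating that response (the -17 code reads as an EEXIST-style errno, which is an interpretation, not stated in the log):

```python
import json

# The JSON-RPC error response printed in the log above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# The NOT wrapper in autotest_common.sh succeeds only when rpc.py
# exits non-zero; here we just confirm the error is the expected one.
assert response["code"] == -17
assert "File exists" in response["message"]
print("expected failure confirmed")
```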
00:15:06.234 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.234 18:29:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.234 18:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.234 18:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:06.493 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:06.493 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:06.493 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.752 [2024-07-15 18:29:52.244028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.752 [2024-07-15 18:29:52.244070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.752 [2024-07-15 18:29:52.244085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d8fa40 00:15:06.752 [2024-07-15 18:29:52.244094] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.752 [2024-07-15 18:29:52.245783] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.752 [2024-07-15 18:29:52.245809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.752 [2024-07-15 18:29:52.245869] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.752 [2024-07-15 18:29:52.245895] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.752 pt1 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.752 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.010 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.010 "name": "raid_bdev1", 00:15:07.010 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:07.010 "strip_size_kb": 64, 00:15:07.010 "state": "configuring", 00:15:07.010 "raid_level": "concat", 00:15:07.010 "superblock": true, 00:15:07.010 "num_base_bdevs": 3, 00:15:07.010 "num_base_bdevs_discovered": 1, 00:15:07.010 "num_base_bdevs_operational": 3, 00:15:07.010 "base_bdevs_list": [ 00:15:07.010 { 00:15:07.010 "name": "pt1", 00:15:07.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.010 
"is_configured": true, 00:15:07.010 "data_offset": 2048, 00:15:07.010 "data_size": 63488 00:15:07.010 }, 00:15:07.010 { 00:15:07.010 "name": null, 00:15:07.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.010 "is_configured": false, 00:15:07.010 "data_offset": 2048, 00:15:07.010 "data_size": 63488 00:15:07.010 }, 00:15:07.010 { 00:15:07.010 "name": null, 00:15:07.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.010 "is_configured": false, 00:15:07.010 "data_offset": 2048, 00:15:07.010 "data_size": 63488 00:15:07.010 } 00:15:07.010 ] 00:15:07.010 }' 00:15:07.010 18:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.010 18:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.954 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:15:07.954 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.954 [2024-07-15 18:29:53.375080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.954 [2024-07-15 18:29:53.375126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.954 [2024-07-15 18:29:53.375143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d8f2c0 00:15:07.954 [2024-07-15 18:29:53.375153] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.954 [2024-07-15 18:29:53.375480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.954 [2024-07-15 18:29:53.375494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.954 [2024-07-15 18:29:53.375551] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.954 [2024-07-15 
18:29:53.375567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.954 pt2 00:15:07.954 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:08.220 [2024-07-15 18:29:53.635791] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.220 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.477 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.477 "name": "raid_bdev1", 00:15:08.477 
"uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:08.477 "strip_size_kb": 64, 00:15:08.477 "state": "configuring", 00:15:08.477 "raid_level": "concat", 00:15:08.477 "superblock": true, 00:15:08.477 "num_base_bdevs": 3, 00:15:08.477 "num_base_bdevs_discovered": 1, 00:15:08.477 "num_base_bdevs_operational": 3, 00:15:08.477 "base_bdevs_list": [ 00:15:08.477 { 00:15:08.477 "name": "pt1", 00:15:08.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.477 "is_configured": true, 00:15:08.477 "data_offset": 2048, 00:15:08.477 "data_size": 63488 00:15:08.477 }, 00:15:08.477 { 00:15:08.477 "name": null, 00:15:08.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.477 "is_configured": false, 00:15:08.477 "data_offset": 2048, 00:15:08.477 "data_size": 63488 00:15:08.477 }, 00:15:08.477 { 00:15:08.477 "name": null, 00:15:08.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.477 "is_configured": false, 00:15:08.477 "data_offset": 2048, 00:15:08.477 "data_size": 63488 00:15:08.477 } 00:15:08.477 ] 00:15:08.477 }' 00:15:08.477 18:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.477 18:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.042 18:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:09.042 18:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.042 18:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.301 [2024-07-15 18:29:54.770993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.301 [2024-07-15 18:29:54.771040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.301 [2024-07-15 18:29:54.771057] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3a2e0 00:15:09.301 [2024-07-15 18:29:54.771066] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.301 [2024-07-15 18:29:54.771395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.301 [2024-07-15 18:29:54.771410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.301 [2024-07-15 18:29:54.771464] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.301 [2024-07-15 18:29:54.771481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.301 pt2 00:15:09.301 18:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:09.301 18:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.301 18:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.606 [2024-07-15 18:29:55.031696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.606 [2024-07-15 18:29:55.031731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.606 [2024-07-15 18:29:55.031743] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3b6d0 00:15:09.606 [2024-07-15 18:29:55.031752] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.606 [2024-07-15 18:29:55.032052] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.606 [2024-07-15 18:29:55.032068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.606 [2024-07-15 18:29:55.032116] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:09.606 
[2024-07-15 18:29:55.032131] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.606 [2024-07-15 18:29:55.032235] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f3bfd0 00:15:09.606 [2024-07-15 18:29:55.032244] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.606 [2024-07-15 18:29:55.032421] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f3d0f0 00:15:09.606 [2024-07-15 18:29:55.032549] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f3bfd0 00:15:09.606 [2024-07-15 18:29:55.032556] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f3bfd0 00:15:09.606 [2024-07-15 18:29:55.032655] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.606 pt3 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.606 
18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.606 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.877 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.877 "name": "raid_bdev1", 00:15:09.877 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:09.877 "strip_size_kb": 64, 00:15:09.877 "state": "online", 00:15:09.877 "raid_level": "concat", 00:15:09.877 "superblock": true, 00:15:09.877 "num_base_bdevs": 3, 00:15:09.877 "num_base_bdevs_discovered": 3, 00:15:09.877 "num_base_bdevs_operational": 3, 00:15:09.877 "base_bdevs_list": [ 00:15:09.877 { 00:15:09.877 "name": "pt1", 00:15:09.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.877 "is_configured": true, 00:15:09.877 "data_offset": 2048, 00:15:09.877 "data_size": 63488 00:15:09.877 }, 00:15:09.877 { 00:15:09.877 "name": "pt2", 00:15:09.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.877 "is_configured": true, 00:15:09.877 "data_offset": 2048, 00:15:09.877 "data_size": 63488 00:15:09.877 }, 00:15:09.877 { 00:15:09.877 "name": "pt3", 00:15:09.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.877 "is_configured": true, 00:15:09.877 "data_offset": 2048, 00:15:09.877 "data_size": 63488 00:15:09.877 } 00:15:09.877 ] 00:15:09.877 }' 00:15:09.877 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.877 18:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_properties raid_bdev1 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.443 18:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:10.701 [2024-07-15 18:29:56.183089] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.702 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:10.702 "name": "raid_bdev1", 00:15:10.702 "aliases": [ 00:15:10.702 "d5e69071-837b-4bba-94eb-633b7e785840" 00:15:10.702 ], 00:15:10.702 "product_name": "Raid Volume", 00:15:10.702 "block_size": 512, 00:15:10.702 "num_blocks": 190464, 00:15:10.702 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:10.702 "assigned_rate_limits": { 00:15:10.702 "rw_ios_per_sec": 0, 00:15:10.702 "rw_mbytes_per_sec": 0, 00:15:10.702 "r_mbytes_per_sec": 0, 00:15:10.702 "w_mbytes_per_sec": 0 00:15:10.702 }, 00:15:10.702 "claimed": false, 00:15:10.702 "zoned": false, 00:15:10.702 "supported_io_types": { 00:15:10.702 "read": true, 00:15:10.702 "write": true, 00:15:10.702 "unmap": true, 00:15:10.702 "flush": true, 00:15:10.702 "reset": true, 00:15:10.702 "nvme_admin": false, 00:15:10.702 "nvme_io": false, 00:15:10.702 "nvme_io_md": false, 00:15:10.702 "write_zeroes": true, 00:15:10.702 "zcopy": false, 00:15:10.702 
"get_zone_info": false, 00:15:10.702 "zone_management": false, 00:15:10.702 "zone_append": false, 00:15:10.702 "compare": false, 00:15:10.702 "compare_and_write": false, 00:15:10.702 "abort": false, 00:15:10.702 "seek_hole": false, 00:15:10.702 "seek_data": false, 00:15:10.702 "copy": false, 00:15:10.702 "nvme_iov_md": false 00:15:10.702 }, 00:15:10.702 "memory_domains": [ 00:15:10.702 { 00:15:10.702 "dma_device_id": "system", 00:15:10.702 "dma_device_type": 1 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.702 "dma_device_type": 2 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "dma_device_id": "system", 00:15:10.702 "dma_device_type": 1 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.702 "dma_device_type": 2 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "dma_device_id": "system", 00:15:10.702 "dma_device_type": 1 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.702 "dma_device_type": 2 00:15:10.702 } 00:15:10.702 ], 00:15:10.702 "driver_specific": { 00:15:10.702 "raid": { 00:15:10.702 "uuid": "d5e69071-837b-4bba-94eb-633b7e785840", 00:15:10.702 "strip_size_kb": 64, 00:15:10.702 "state": "online", 00:15:10.702 "raid_level": "concat", 00:15:10.702 "superblock": true, 00:15:10.702 "num_base_bdevs": 3, 00:15:10.702 "num_base_bdevs_discovered": 3, 00:15:10.702 "num_base_bdevs_operational": 3, 00:15:10.702 "base_bdevs_list": [ 00:15:10.702 { 00:15:10.702 "name": "pt1", 00:15:10.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.702 "is_configured": true, 00:15:10.702 "data_offset": 2048, 00:15:10.702 "data_size": 63488 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "name": "pt2", 00:15:10.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.702 "is_configured": true, 00:15:10.702 "data_offset": 2048, 00:15:10.702 "data_size": 63488 00:15:10.702 }, 00:15:10.702 { 00:15:10.702 "name": "pt3", 00:15:10.702 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:15:10.702 "is_configured": true, 00:15:10.702 "data_offset": 2048, 00:15:10.702 "data_size": 63488 00:15:10.702 } 00:15:10.702 ] 00:15:10.702 } 00:15:10.702 } 00:15:10.702 }' 00:15:10.702 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.702 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:10.702 pt2 00:15:10.702 pt3' 00:15:10.702 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.972 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:10.972 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:10.972 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:10.972 "name": "pt1", 00:15:10.972 "aliases": [ 00:15:10.972 "00000000-0000-0000-0000-000000000001" 00:15:10.972 ], 00:15:10.972 "product_name": "passthru", 00:15:10.972 "block_size": 512, 00:15:10.972 "num_blocks": 65536, 00:15:10.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.972 "assigned_rate_limits": { 00:15:10.972 "rw_ios_per_sec": 0, 00:15:10.972 "rw_mbytes_per_sec": 0, 00:15:10.972 "r_mbytes_per_sec": 0, 00:15:10.972 "w_mbytes_per_sec": 0 00:15:10.972 }, 00:15:10.972 "claimed": true, 00:15:10.972 "claim_type": "exclusive_write", 00:15:10.972 "zoned": false, 00:15:10.972 "supported_io_types": { 00:15:10.972 "read": true, 00:15:10.972 "write": true, 00:15:10.972 "unmap": true, 00:15:10.972 "flush": true, 00:15:10.972 "reset": true, 00:15:10.972 "nvme_admin": false, 00:15:10.972 "nvme_io": false, 00:15:10.972 "nvme_io_md": false, 00:15:10.972 "write_zeroes": true, 00:15:10.972 "zcopy": true, 00:15:10.972 "get_zone_info": false, 
00:15:10.972 "zone_management": false, 00:15:10.972 "zone_append": false, 00:15:10.972 "compare": false, 00:15:10.972 "compare_and_write": false, 00:15:10.972 "abort": true, 00:15:10.972 "seek_hole": false, 00:15:10.972 "seek_data": false, 00:15:10.972 "copy": true, 00:15:10.972 "nvme_iov_md": false 00:15:10.972 }, 00:15:10.972 "memory_domains": [ 00:15:10.972 { 00:15:10.972 "dma_device_id": "system", 00:15:10.972 "dma_device_type": 1 00:15:10.972 }, 00:15:10.972 { 00:15:10.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.972 "dma_device_type": 2 00:15:10.972 } 00:15:10.972 ], 00:15:10.972 "driver_specific": { 00:15:10.972 "passthru": { 00:15:10.972 "name": "pt1", 00:15:10.972 "base_bdev_name": "malloc1" 00:15:10.972 } 00:15:10.972 } 00:15:10.972 }' 00:15:10.972 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.231 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.490 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.490 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.490 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.490 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.490 18:29:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.490 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:11.490 18:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.749 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.749 "name": "pt2", 00:15:11.749 "aliases": [ 00:15:11.749 "00000000-0000-0000-0000-000000000002" 00:15:11.749 ], 00:15:11.749 "product_name": "passthru", 00:15:11.749 "block_size": 512, 00:15:11.749 "num_blocks": 65536, 00:15:11.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.749 "assigned_rate_limits": { 00:15:11.749 "rw_ios_per_sec": 0, 00:15:11.749 "rw_mbytes_per_sec": 0, 00:15:11.749 "r_mbytes_per_sec": 0, 00:15:11.749 "w_mbytes_per_sec": 0 00:15:11.749 }, 00:15:11.749 "claimed": true, 00:15:11.749 "claim_type": "exclusive_write", 00:15:11.749 "zoned": false, 00:15:11.749 "supported_io_types": { 00:15:11.749 "read": true, 00:15:11.749 "write": true, 00:15:11.749 "unmap": true, 00:15:11.749 "flush": true, 00:15:11.749 "reset": true, 00:15:11.749 "nvme_admin": false, 00:15:11.749 "nvme_io": false, 00:15:11.749 "nvme_io_md": false, 00:15:11.749 "write_zeroes": true, 00:15:11.749 "zcopy": true, 00:15:11.749 "get_zone_info": false, 00:15:11.749 "zone_management": false, 00:15:11.749 "zone_append": false, 00:15:11.749 "compare": false, 00:15:11.749 "compare_and_write": false, 00:15:11.749 "abort": true, 00:15:11.749 "seek_hole": false, 00:15:11.749 "seek_data": false, 00:15:11.749 "copy": true, 00:15:11.749 "nvme_iov_md": false 00:15:11.749 }, 00:15:11.749 "memory_domains": [ 00:15:11.749 { 00:15:11.749 "dma_device_id": "system", 00:15:11.749 "dma_device_type": 1 00:15:11.749 }, 00:15:11.749 { 00:15:11.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.749 
"dma_device_type": 2 00:15:11.749 } 00:15:11.749 ], 00:15:11.749 "driver_specific": { 00:15:11.749 "passthru": { 00:15:11.749 "name": "pt2", 00:15:11.749 "base_bdev_name": "malloc2" 00:15:11.749 } 00:15:11.749 } 00:15:11.749 }' 00:15:11.749 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.749 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.749 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.749 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.749 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:12.008 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.267 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.267 "name": "pt3", 00:15:12.267 "aliases": [ 00:15:12.267 
"00000000-0000-0000-0000-000000000003" 00:15:12.267 ], 00:15:12.267 "product_name": "passthru", 00:15:12.267 "block_size": 512, 00:15:12.267 "num_blocks": 65536, 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.267 "assigned_rate_limits": { 00:15:12.267 "rw_ios_per_sec": 0, 00:15:12.267 "rw_mbytes_per_sec": 0, 00:15:12.267 "r_mbytes_per_sec": 0, 00:15:12.267 "w_mbytes_per_sec": 0 00:15:12.267 }, 00:15:12.267 "claimed": true, 00:15:12.267 "claim_type": "exclusive_write", 00:15:12.267 "zoned": false, 00:15:12.267 "supported_io_types": { 00:15:12.267 "read": true, 00:15:12.267 "write": true, 00:15:12.267 "unmap": true, 00:15:12.267 "flush": true, 00:15:12.267 "reset": true, 00:15:12.267 "nvme_admin": false, 00:15:12.267 "nvme_io": false, 00:15:12.267 "nvme_io_md": false, 00:15:12.267 "write_zeroes": true, 00:15:12.267 "zcopy": true, 00:15:12.267 "get_zone_info": false, 00:15:12.267 "zone_management": false, 00:15:12.267 "zone_append": false, 00:15:12.267 "compare": false, 00:15:12.267 "compare_and_write": false, 00:15:12.267 "abort": true, 00:15:12.267 "seek_hole": false, 00:15:12.267 "seek_data": false, 00:15:12.267 "copy": true, 00:15:12.267 "nvme_iov_md": false 00:15:12.267 }, 00:15:12.267 "memory_domains": [ 00:15:12.267 { 00:15:12.267 "dma_device_id": "system", 00:15:12.267 "dma_device_type": 1 00:15:12.267 }, 00:15:12.267 { 00:15:12.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.267 "dma_device_type": 2 00:15:12.267 } 00:15:12.267 ], 00:15:12.267 "driver_specific": { 00:15:12.267 "passthru": { 00:15:12.267 "name": "pt3", 00:15:12.267 "base_bdev_name": "malloc3" 00:15:12.267 } 00:15:12.267 } 00:15:12.267 }' 00:15:12.267 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.526 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.526 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.526 18:29:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.526 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.526 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.526 18:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.526 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.526 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.526 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.811 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.811 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.811 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:12.811 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:13.070 [2024-07-15 18:29:58.369017] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d5e69071-837b-4bba-94eb-633b7e785840 '!=' d5e69071-837b-4bba-94eb-633b7e785840 ']' 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2808433 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2808433 ']' 00:15:13.070 18:29:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2808433 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808433 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808433' 00:15:13.070 killing process with pid 2808433 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2808433 00:15:13.070 [2024-07-15 18:29:58.438226] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.070 [2024-07-15 18:29:58.438278] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.070 [2024-07-15 18:29:58.438333] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.070 [2024-07-15 18:29:58.438342] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f3bfd0 name raid_bdev1, state offline 00:15:13.070 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2808433 00:15:13.070 [2024-07-15 18:29:58.464247] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.329 18:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:13.329 00:15:13.329 real 0m14.777s 00:15:13.329 user 0m27.259s 00:15:13.329 sys 0m2.090s 00:15:13.329 18:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.330 18:29:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.330 ************************************ 00:15:13.330 END TEST raid_superblock_test 00:15:13.330 ************************************ 00:15:13.330 18:29:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:13.330 18:29:58 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:15:13.330 18:29:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:13.330 18:29:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.330 18:29:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.330 ************************************ 00:15:13.330 START TEST raid_read_error_test 00:15:13.330 ************************************ 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.330 18:29:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ELuVtfJoqA 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2811061 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2811061 /var/tmp/spdk-raid.sock 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2811061 ']' 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.330 18:29:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.330 [2024-07-15 18:29:58.807885] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:15:13.330 [2024-07-15 18:29:58.808011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811061 ] 00:15:13.589 [2024-07-15 18:29:58.945288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.589 [2024-07-15 18:29:59.039792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.589 [2024-07-15 18:29:59.098791] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.589 [2024-07-15 18:29:59.098822] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.523 18:29:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.523 18:29:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:14.523 18:29:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.523 18:29:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.523 BaseBdev1_malloc 00:15:14.523 18:29:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:14.781 true 00:15:14.781 18:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:15.040 [2024-07-15 18:30:00.469044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:15.040 [2024-07-15 18:30:00.469087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:15.040 [2024-07-15 18:30:00.469106] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcb9d20 00:15:15.040 [2024-07-15 18:30:00.469116] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.040 [2024-07-15 18:30:00.470940] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.040 [2024-07-15 18:30:00.470978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.040 BaseBdev1 00:15:15.040 18:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:15.040 18:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.299 BaseBdev2_malloc 00:15:15.299 18:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:15.556 true 00:15:15.556 18:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:15.815 [2024-07-15 18:30:01.227564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:15.815 [2024-07-15 18:30:01.227603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.815 [2024-07-15 18:30:01.227622] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcbed50 00:15:15.815 [2024-07-15 18:30:01.227632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.815 [2024-07-15 18:30:01.229260] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.815 [2024-07-15 18:30:01.229290] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.815 BaseBdev2 00:15:15.815 18:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:15.815 18:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:16.073 BaseBdev3_malloc 00:15:16.073 18:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:16.331 true 00:15:16.331 18:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:16.589 [2024-07-15 18:30:01.994034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:16.589 [2024-07-15 18:30:01.994077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.589 [2024-07-15 18:30:01.994096] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcbdef0 00:15:16.589 [2024-07-15 18:30:01.994105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.589 [2024-07-15 18:30:01.995774] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.589 [2024-07-15 18:30:01.995799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.589 BaseBdev3 00:15:16.589 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:15:16.847 [2024-07-15 18:30:02.246739] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.847 [2024-07-15 18:30:02.248129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.847 [2024-07-15 18:30:02.248198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.847 [2024-07-15 18:30:02.248408] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcc1a00 00:15:16.847 [2024-07-15 18:30:02.248418] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:16.847 [2024-07-15 18:30:02.248619] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb15790 00:15:16.847 [2024-07-15 18:30:02.248776] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcc1a00 00:15:16.847 [2024-07-15 18:30:02.248784] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xcc1a00 00:15:16.847 [2024-07-15 18:30:02.248893] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.847 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:16.847 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:16.847 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.848 
18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.848 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.128 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.128 "name": "raid_bdev1", 00:15:17.128 "uuid": "189f5a34-ffd9-4150-8ac4-d52f4bccb96b", 00:15:17.128 "strip_size_kb": 64, 00:15:17.128 "state": "online", 00:15:17.128 "raid_level": "concat", 00:15:17.128 "superblock": true, 00:15:17.128 "num_base_bdevs": 3, 00:15:17.128 "num_base_bdevs_discovered": 3, 00:15:17.128 "num_base_bdevs_operational": 3, 00:15:17.128 "base_bdevs_list": [ 00:15:17.128 { 00:15:17.128 "name": "BaseBdev1", 00:15:17.128 "uuid": "4c7d461e-92df-566b-97a4-498e54abcb4d", 00:15:17.128 "is_configured": true, 00:15:17.128 "data_offset": 2048, 00:15:17.128 "data_size": 63488 00:15:17.128 }, 00:15:17.128 { 00:15:17.128 "name": "BaseBdev2", 00:15:17.128 "uuid": "95a8977b-395e-5b6f-a8e0-dce2a76d79ea", 00:15:17.128 "is_configured": true, 00:15:17.128 "data_offset": 2048, 00:15:17.128 "data_size": 63488 00:15:17.128 }, 00:15:17.128 { 00:15:17.128 "name": "BaseBdev3", 00:15:17.128 "uuid": "480a2c8d-a432-586d-9bc5-3a5a32ee9ce8", 00:15:17.128 "is_configured": true, 00:15:17.128 "data_offset": 2048, 00:15:17.128 "data_size": 63488 00:15:17.128 } 00:15:17.128 ] 00:15:17.128 }' 00:15:17.128 18:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.128 18:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.694 18:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 
-- # sleep 1 00:15:17.694 18:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:17.952 [2024-07-15 18:30:03.261719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcc1930 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.886 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.144 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.144 "name": "raid_bdev1", 00:15:19.144 "uuid": "189f5a34-ffd9-4150-8ac4-d52f4bccb96b", 00:15:19.144 "strip_size_kb": 64, 00:15:19.144 "state": "online", 00:15:19.144 "raid_level": "concat", 00:15:19.144 "superblock": true, 00:15:19.144 "num_base_bdevs": 3, 00:15:19.144 "num_base_bdevs_discovered": 3, 00:15:19.144 "num_base_bdevs_operational": 3, 00:15:19.144 "base_bdevs_list": [ 00:15:19.144 { 00:15:19.144 "name": "BaseBdev1", 00:15:19.144 "uuid": "4c7d461e-92df-566b-97a4-498e54abcb4d", 00:15:19.144 "is_configured": true, 00:15:19.144 "data_offset": 2048, 00:15:19.144 "data_size": 63488 00:15:19.144 }, 00:15:19.144 { 00:15:19.144 "name": "BaseBdev2", 00:15:19.144 "uuid": "95a8977b-395e-5b6f-a8e0-dce2a76d79ea", 00:15:19.144 "is_configured": true, 00:15:19.144 "data_offset": 2048, 00:15:19.144 "data_size": 63488 00:15:19.144 }, 00:15:19.144 { 00:15:19.144 "name": "BaseBdev3", 00:15:19.144 "uuid": "480a2c8d-a432-586d-9bc5-3a5a32ee9ce8", 00:15:19.144 "is_configured": true, 00:15:19.144 "data_offset": 2048, 00:15:19.144 "data_size": 63488 00:15:19.144 } 00:15:19.144 ] 00:15:19.144 }' 00:15:19.144 18:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.144 18:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:20.077 [2024-07-15 18:30:05.542402] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.077 [2024-07-15 18:30:05.542442] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.077 [2024-07-15 18:30:05.545870] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.077 [2024-07-15 18:30:05.545906] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.077 [2024-07-15 18:30:05.545938] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.077 [2024-07-15 18:30:05.545946] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcc1a00 name raid_bdev1, state offline 00:15:20.077 0 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2811061 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2811061 ']' 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2811061 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2811061 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2811061' 00:15:20.077 killing process with pid 2811061 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2811061 00:15:20.077 [2024-07-15 18:30:05.621838] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:15:20.077 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2811061 00:15:20.335 [2024-07-15 18:30:05.641764] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ELuVtfJoqA 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:15:20.335 00:15:20.335 real 0m7.150s 00:15:20.335 user 0m11.689s 00:15:20.335 sys 0m0.976s 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.335 18:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.335 ************************************ 00:15:20.335 END TEST raid_read_error_test 00:15:20.335 ************************************ 00:15:20.335 18:30:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:20.335 18:30:05 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:15:20.335 18:30:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:20.335 18:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.335 18:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.594 ************************************ 
00:15:20.594 START TEST raid_write_error_test 00:15:20.594 ************************************ 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
raid_bdev_name=raid_bdev1 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dGvHuk2FMX 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2812349 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2812349 /var/tmp/spdk-raid.sock 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2812349 ']' 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:20.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.594 18:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.594 [2024-07-15 18:30:05.966837] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:20.594 [2024-07-15 18:30:05.966900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812349 ] 00:15:20.594 [2024-07-15 18:30:06.064696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.852 [2024-07-15 18:30:06.160674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.852 [2024-07-15 18:30:06.221635] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.852 [2024-07-15 18:30:06.221666] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.418 18:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.418 18:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:21.418 18:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:21.418 18:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:21.676 BaseBdev1_malloc 00:15:21.676 18:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:21.933 true 00:15:21.934 18:30:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:22.192 [2024-07-15 18:30:07.592166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:22.192 [2024-07-15 18:30:07.592208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.192 [2024-07-15 18:30:07.592225] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14ced20 00:15:22.192 [2024-07-15 18:30:07.592234] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.192 [2024-07-15 18:30:07.594041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.192 [2024-07-15 18:30:07.594068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.192 BaseBdev1 00:15:22.192 18:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:22.192 18:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:22.454 BaseBdev2_malloc 00:15:22.455 18:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:22.713 true 00:15:22.713 18:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:22.970 [2024-07-15 18:30:08.350801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:22.970 [2024-07-15 18:30:08.350841] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.970 [2024-07-15 18:30:08.350859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14d3d50 00:15:22.970 [2024-07-15 18:30:08.350868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.970 [2024-07-15 18:30:08.352684] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.970 [2024-07-15 18:30:08.352712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:22.970 BaseBdev2 00:15:22.970 18:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:22.970 18:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:23.227 BaseBdev3_malloc 00:15:23.227 18:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:23.486 true 00:15:23.486 18:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:23.744 [2024-07-15 18:30:09.117433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:23.744 [2024-07-15 18:30:09.117474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.744 [2024-07-15 18:30:09.117493] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14d2ef0 00:15:23.744 [2024-07-15 18:30:09.117502] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.744 [2024-07-15 18:30:09.119165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:23.744 [2024-07-15 18:30:09.119192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:23.744 BaseBdev3 00:15:23.744 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:15:24.012 [2024-07-15 18:30:09.370145] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.012 [2024-07-15 18:30:09.371529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.012 [2024-07-15 18:30:09.371601] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.012 [2024-07-15 18:30:09.371810] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14d6a00 00:15:24.012 [2024-07-15 18:30:09.371821] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:24.012 [2024-07-15 18:30:09.372031] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132a790 00:15:24.012 [2024-07-15 18:30:09.372188] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14d6a00 00:15:24.012 [2024-07-15 18:30:09.372197] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14d6a00 00:15:24.012 [2024-07-15 18:30:09.372305] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 
-- # local raid_level=concat 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:24.012 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.013 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.013 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.013 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.013 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.013 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.334 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.334 "name": "raid_bdev1", 00:15:24.334 "uuid": "c1245a1b-49d9-404b-adc1-309ff0b59541", 00:15:24.334 "strip_size_kb": 64, 00:15:24.334 "state": "online", 00:15:24.334 "raid_level": "concat", 00:15:24.334 "superblock": true, 00:15:24.334 "num_base_bdevs": 3, 00:15:24.334 "num_base_bdevs_discovered": 3, 00:15:24.334 "num_base_bdevs_operational": 3, 00:15:24.334 "base_bdevs_list": [ 00:15:24.334 { 00:15:24.334 "name": "BaseBdev1", 00:15:24.334 "uuid": "9a031d1a-fae7-5820-a2c6-a00d15c58c12", 00:15:24.334 "is_configured": true, 00:15:24.334 "data_offset": 2048, 00:15:24.334 "data_size": 63488 00:15:24.334 }, 00:15:24.334 { 00:15:24.334 "name": "BaseBdev2", 00:15:24.334 "uuid": "fab88016-4a3c-54f6-a1bd-394ce29d62f3", 00:15:24.334 "is_configured": true, 00:15:24.334 "data_offset": 2048, 00:15:24.334 "data_size": 63488 00:15:24.334 }, 00:15:24.334 { 00:15:24.334 "name": "BaseBdev3", 
00:15:24.334 "uuid": "b833e67b-7b15-5aa3-ab2a-805583d2af60", 00:15:24.334 "is_configured": true, 00:15:24.334 "data_offset": 2048, 00:15:24.334 "data_size": 63488 00:15:24.334 } 00:15:24.334 ] 00:15:24.334 }' 00:15:24.334 18:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.334 18:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.900 18:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:24.900 18:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:24.900 [2024-07-15 18:30:10.445391] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14d6930 00:15:25.835 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.092 18:30:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.092 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.363 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.363 "name": "raid_bdev1", 00:15:26.363 "uuid": "c1245a1b-49d9-404b-adc1-309ff0b59541", 00:15:26.363 "strip_size_kb": 64, 00:15:26.363 "state": "online", 00:15:26.363 "raid_level": "concat", 00:15:26.363 "superblock": true, 00:15:26.363 "num_base_bdevs": 3, 00:15:26.363 "num_base_bdevs_discovered": 3, 00:15:26.363 "num_base_bdevs_operational": 3, 00:15:26.363 "base_bdevs_list": [ 00:15:26.363 { 00:15:26.363 "name": "BaseBdev1", 00:15:26.363 "uuid": "9a031d1a-fae7-5820-a2c6-a00d15c58c12", 00:15:26.363 "is_configured": true, 00:15:26.363 "data_offset": 2048, 00:15:26.363 "data_size": 63488 00:15:26.363 }, 00:15:26.363 { 00:15:26.363 "name": "BaseBdev2", 00:15:26.363 "uuid": "fab88016-4a3c-54f6-a1bd-394ce29d62f3", 00:15:26.363 "is_configured": true, 00:15:26.363 "data_offset": 2048, 00:15:26.363 "data_size": 63488 00:15:26.363 }, 00:15:26.363 { 00:15:26.363 "name": "BaseBdev3", 00:15:26.363 "uuid": "b833e67b-7b15-5aa3-ab2a-805583d2af60", 00:15:26.363 "is_configured": true, 00:15:26.363 "data_offset": 2048, 00:15:26.363 "data_size": 
63488 00:15:26.363 } 00:15:26.363 ] 00:15:26.363 }' 00:15:26.363 18:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.363 18:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:27.297 [2024-07-15 18:30:12.720124] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.297 [2024-07-15 18:30:12.720150] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.297 [2024-07-15 18:30:12.723561] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.297 [2024-07-15 18:30:12.723595] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.297 [2024-07-15 18:30:12.723634] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.297 [2024-07-15 18:30:12.723643] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14d6a00 name raid_bdev1, state offline 00:15:27.297 0 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2812349 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2812349 ']' 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2812349 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2812349 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2812349' 00:15:27.297 killing process with pid 2812349 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2812349 00:15:27.297 [2024-07-15 18:30:12.796738] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.297 18:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2812349 00:15:27.297 [2024-07-15 18:30:12.817145] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dGvHuk2FMX 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:15:27.556 00:15:27.556 real 0m7.135s 00:15:27.556 user 0m11.650s 00:15:27.556 sys 0m0.977s 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.556 18:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.556 ************************************ 00:15:27.556 END TEST raid_write_error_test 00:15:27.556 
************************************ 00:15:27.556 18:30:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:27.556 18:30:13 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:27.556 18:30:13 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:15:27.556 18:30:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:27.556 18:30:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.556 18:30:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.556 ************************************ 00:15:27.556 START TEST raid_state_function_test 00:15:27.556 ************************************ 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2813897 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2813897' 00:15:27.556 Process raid pid: 2813897 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2813897 /var/tmp/spdk-raid.sock 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2813897 ']' 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:27.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.556 18:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.815 [2024-07-15 18:30:13.138244] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:15:27.815 [2024-07-15 18:30:13.138311] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.815 [2024-07-15 18:30:13.242457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.815 [2024-07-15 18:30:13.336683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.073 [2024-07-15 18:30:13.392781] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.073 [2024-07-15 18:30:13.392806] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.639 18:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.639 18:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:28.639 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:28.898 [2024-07-15 18:30:14.243251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.898 [2024-07-15 18:30:14.243292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.898 [2024-07-15 18:30:14.243301] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.898 [2024-07-15 18:30:14.243309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.898 [2024-07-15 18:30:14.243319] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:28.898 [2024-07-15 18:30:14.243327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.898 18:30:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.898 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.157 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.157 "name": "Existed_Raid", 00:15:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.157 "strip_size_kb": 0, 00:15:29.157 "state": "configuring", 00:15:29.157 "raid_level": "raid1", 00:15:29.157 "superblock": false, 00:15:29.157 "num_base_bdevs": 3, 00:15:29.157 "num_base_bdevs_discovered": 0, 00:15:29.157 "num_base_bdevs_operational": 3, 00:15:29.157 "base_bdevs_list": [ 00:15:29.157 { 00:15:29.157 
"name": "BaseBdev1", 00:15:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.157 "is_configured": false, 00:15:29.157 "data_offset": 0, 00:15:29.157 "data_size": 0 00:15:29.157 }, 00:15:29.157 { 00:15:29.157 "name": "BaseBdev2", 00:15:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.157 "is_configured": false, 00:15:29.157 "data_offset": 0, 00:15:29.157 "data_size": 0 00:15:29.157 }, 00:15:29.157 { 00:15:29.157 "name": "BaseBdev3", 00:15:29.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.157 "is_configured": false, 00:15:29.157 "data_offset": 0, 00:15:29.157 "data_size": 0 00:15:29.157 } 00:15:29.157 ] 00:15:29.157 }' 00:15:29.157 18:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.157 18:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.725 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.984 [2024-07-15 18:30:15.378145] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.984 [2024-07-15 18:30:15.378176] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1631ba0 name Existed_Raid, state configuring 00:15:29.984 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:30.243 [2024-07-15 18:30:15.646875] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.243 [2024-07-15 18:30:15.646900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.243 [2024-07-15 18:30:15.646912] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:15:30.243 [2024-07-15 18:30:15.646921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.243 [2024-07-15 18:30:15.646928] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.243 [2024-07-15 18:30:15.646935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.243 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.504 [2024-07-15 18:30:15.924996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.504 BaseBdev1 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.504 18:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.763 18:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.022 [ 00:15:31.022 { 00:15:31.022 "name": "BaseBdev1", 00:15:31.022 "aliases": [ 00:15:31.022 "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4" 
00:15:31.022 ], 00:15:31.022 "product_name": "Malloc disk", 00:15:31.022 "block_size": 512, 00:15:31.022 "num_blocks": 65536, 00:15:31.022 "uuid": "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:31.022 "assigned_rate_limits": { 00:15:31.022 "rw_ios_per_sec": 0, 00:15:31.022 "rw_mbytes_per_sec": 0, 00:15:31.022 "r_mbytes_per_sec": 0, 00:15:31.022 "w_mbytes_per_sec": 0 00:15:31.022 }, 00:15:31.022 "claimed": true, 00:15:31.022 "claim_type": "exclusive_write", 00:15:31.022 "zoned": false, 00:15:31.022 "supported_io_types": { 00:15:31.022 "read": true, 00:15:31.022 "write": true, 00:15:31.022 "unmap": true, 00:15:31.022 "flush": true, 00:15:31.022 "reset": true, 00:15:31.022 "nvme_admin": false, 00:15:31.022 "nvme_io": false, 00:15:31.022 "nvme_io_md": false, 00:15:31.022 "write_zeroes": true, 00:15:31.022 "zcopy": true, 00:15:31.022 "get_zone_info": false, 00:15:31.022 "zone_management": false, 00:15:31.022 "zone_append": false, 00:15:31.022 "compare": false, 00:15:31.022 "compare_and_write": false, 00:15:31.022 "abort": true, 00:15:31.022 "seek_hole": false, 00:15:31.022 "seek_data": false, 00:15:31.022 "copy": true, 00:15:31.022 "nvme_iov_md": false 00:15:31.022 }, 00:15:31.022 "memory_domains": [ 00:15:31.022 { 00:15:31.022 "dma_device_id": "system", 00:15:31.022 "dma_device_type": 1 00:15:31.022 }, 00:15:31.022 { 00:15:31.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.022 "dma_device_type": 2 00:15:31.022 } 00:15:31.022 ], 00:15:31.022 "driver_specific": {} 00:15:31.022 } 00:15:31.022 ] 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.022 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.281 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.281 "name": "Existed_Raid", 00:15:31.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.281 "strip_size_kb": 0, 00:15:31.281 "state": "configuring", 00:15:31.281 "raid_level": "raid1", 00:15:31.281 "superblock": false, 00:15:31.281 "num_base_bdevs": 3, 00:15:31.281 "num_base_bdevs_discovered": 1, 00:15:31.281 "num_base_bdevs_operational": 3, 00:15:31.281 "base_bdevs_list": [ 00:15:31.281 { 00:15:31.281 "name": "BaseBdev1", 00:15:31.281 "uuid": "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:31.281 "is_configured": true, 00:15:31.281 "data_offset": 0, 00:15:31.281 "data_size": 65536 00:15:31.281 }, 00:15:31.281 { 00:15:31.281 "name": "BaseBdev2", 00:15:31.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.281 "is_configured": 
false, 00:15:31.281 "data_offset": 0, 00:15:31.281 "data_size": 0 00:15:31.281 }, 00:15:31.281 { 00:15:31.281 "name": "BaseBdev3", 00:15:31.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.281 "is_configured": false, 00:15:31.281 "data_offset": 0, 00:15:31.281 "data_size": 0 00:15:31.281 } 00:15:31.281 ] 00:15:31.281 }' 00:15:31.281 18:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.281 18:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.848 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.107 [2024-07-15 18:30:17.585467] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.107 [2024-07-15 18:30:17.585509] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1631470 name Existed_Raid, state configuring 00:15:32.107 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:32.366 [2024-07-15 18:30:17.838170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.366 [2024-07-15 18:30:17.839686] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.366 [2024-07-15 18:30:17.839718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.366 [2024-07-15 18:30:17.839729] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.366 [2024-07-15 18:30:17.839737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 
-- # (( i = 1 )) 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.366 18:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.625 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.625 "name": "Existed_Raid", 00:15:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.625 "strip_size_kb": 0, 00:15:32.625 "state": "configuring", 00:15:32.625 "raid_level": "raid1", 00:15:32.625 "superblock": false, 00:15:32.625 "num_base_bdevs": 3, 
00:15:32.625 "num_base_bdevs_discovered": 1, 00:15:32.625 "num_base_bdevs_operational": 3, 00:15:32.625 "base_bdevs_list": [ 00:15:32.625 { 00:15:32.625 "name": "BaseBdev1", 00:15:32.625 "uuid": "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:32.625 "is_configured": true, 00:15:32.625 "data_offset": 0, 00:15:32.625 "data_size": 65536 00:15:32.625 }, 00:15:32.625 { 00:15:32.625 "name": "BaseBdev2", 00:15:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.625 "is_configured": false, 00:15:32.625 "data_offset": 0, 00:15:32.625 "data_size": 0 00:15:32.625 }, 00:15:32.625 { 00:15:32.625 "name": "BaseBdev3", 00:15:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.625 "is_configured": false, 00:15:32.625 "data_offset": 0, 00:15:32.625 "data_size": 0 00:15:32.625 } 00:15:32.625 ] 00:15:32.625 }' 00:15:32.625 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.625 18:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.561 18:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.561 [2024-07-15 18:30:18.988410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.561 BaseBdev2 00:15:33.561 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:33.561 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:33.561 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.561 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:33.561 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.561 18:30:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.561 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.819 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.078 [ 00:15:34.078 { 00:15:34.078 "name": "BaseBdev2", 00:15:34.078 "aliases": [ 00:15:34.078 "5ff8d654-97e0-4e82-98f7-9aaff3cb6812" 00:15:34.078 ], 00:15:34.078 "product_name": "Malloc disk", 00:15:34.078 "block_size": 512, 00:15:34.078 "num_blocks": 65536, 00:15:34.078 "uuid": "5ff8d654-97e0-4e82-98f7-9aaff3cb6812", 00:15:34.078 "assigned_rate_limits": { 00:15:34.078 "rw_ios_per_sec": 0, 00:15:34.078 "rw_mbytes_per_sec": 0, 00:15:34.078 "r_mbytes_per_sec": 0, 00:15:34.078 "w_mbytes_per_sec": 0 00:15:34.078 }, 00:15:34.078 "claimed": true, 00:15:34.078 "claim_type": "exclusive_write", 00:15:34.078 "zoned": false, 00:15:34.078 "supported_io_types": { 00:15:34.078 "read": true, 00:15:34.078 "write": true, 00:15:34.078 "unmap": true, 00:15:34.078 "flush": true, 00:15:34.078 "reset": true, 00:15:34.078 "nvme_admin": false, 00:15:34.078 "nvme_io": false, 00:15:34.078 "nvme_io_md": false, 00:15:34.078 "write_zeroes": true, 00:15:34.078 "zcopy": true, 00:15:34.078 "get_zone_info": false, 00:15:34.078 "zone_management": false, 00:15:34.078 "zone_append": false, 00:15:34.078 "compare": false, 00:15:34.078 "compare_and_write": false, 00:15:34.078 "abort": true, 00:15:34.078 "seek_hole": false, 00:15:34.078 "seek_data": false, 00:15:34.078 "copy": true, 00:15:34.078 "nvme_iov_md": false 00:15:34.078 }, 00:15:34.078 "memory_domains": [ 00:15:34.078 { 00:15:34.078 "dma_device_id": "system", 00:15:34.078 "dma_device_type": 1 00:15:34.078 }, 00:15:34.078 { 
00:15:34.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.078 "dma_device_type": 2 00:15:34.078 } 00:15:34.078 ], 00:15:34.078 "driver_specific": {} 00:15:34.078 } 00:15:34.078 ] 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.078 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:34.337 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.337 "name": "Existed_Raid", 00:15:34.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.337 "strip_size_kb": 0, 00:15:34.337 "state": "configuring", 00:15:34.337 "raid_level": "raid1", 00:15:34.337 "superblock": false, 00:15:34.337 "num_base_bdevs": 3, 00:15:34.337 "num_base_bdevs_discovered": 2, 00:15:34.337 "num_base_bdevs_operational": 3, 00:15:34.337 "base_bdevs_list": [ 00:15:34.337 { 00:15:34.337 "name": "BaseBdev1", 00:15:34.337 "uuid": "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:34.337 "is_configured": true, 00:15:34.337 "data_offset": 0, 00:15:34.337 "data_size": 65536 00:15:34.337 }, 00:15:34.337 { 00:15:34.337 "name": "BaseBdev2", 00:15:34.337 "uuid": "5ff8d654-97e0-4e82-98f7-9aaff3cb6812", 00:15:34.337 "is_configured": true, 00:15:34.337 "data_offset": 0, 00:15:34.337 "data_size": 65536 00:15:34.337 }, 00:15:34.337 { 00:15:34.337 "name": "BaseBdev3", 00:15:34.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.337 "is_configured": false, 00:15:34.337 "data_offset": 0, 00:15:34.337 "data_size": 0 00:15:34.337 } 00:15:34.337 ] 00:15:34.337 }' 00:15:34.337 18:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.337 18:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.904 18:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.163 [2024-07-15 18:30:20.652279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.163 [2024-07-15 18:30:20.652318] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1632360 00:15:35.163 [2024-07-15 18:30:20.652326] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 
00:15:35.163 [2024-07-15 18:30:20.652522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17da860 00:15:35.163 [2024-07-15 18:30:20.652658] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1632360 00:15:35.163 [2024-07-15 18:30:20.652666] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1632360 00:15:35.163 [2024-07-15 18:30:20.652831] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.163 BaseBdev3 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:35.163 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.422 18:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.680 [ 00:15:35.680 { 00:15:35.680 "name": "BaseBdev3", 00:15:35.680 "aliases": [ 00:15:35.680 "5c15ae76-4b78-420d-ab91-28e2bacc1555" 00:15:35.680 ], 00:15:35.680 "product_name": "Malloc disk", 00:15:35.680 "block_size": 512, 00:15:35.680 "num_blocks": 65536, 00:15:35.680 "uuid": "5c15ae76-4b78-420d-ab91-28e2bacc1555", 00:15:35.680 "assigned_rate_limits": { 
00:15:35.680 "rw_ios_per_sec": 0, 00:15:35.680 "rw_mbytes_per_sec": 0, 00:15:35.680 "r_mbytes_per_sec": 0, 00:15:35.680 "w_mbytes_per_sec": 0 00:15:35.680 }, 00:15:35.680 "claimed": true, 00:15:35.680 "claim_type": "exclusive_write", 00:15:35.680 "zoned": false, 00:15:35.680 "supported_io_types": { 00:15:35.680 "read": true, 00:15:35.680 "write": true, 00:15:35.680 "unmap": true, 00:15:35.680 "flush": true, 00:15:35.680 "reset": true, 00:15:35.680 "nvme_admin": false, 00:15:35.680 "nvme_io": false, 00:15:35.680 "nvme_io_md": false, 00:15:35.680 "write_zeroes": true, 00:15:35.680 "zcopy": true, 00:15:35.680 "get_zone_info": false, 00:15:35.680 "zone_management": false, 00:15:35.680 "zone_append": false, 00:15:35.680 "compare": false, 00:15:35.680 "compare_and_write": false, 00:15:35.680 "abort": true, 00:15:35.680 "seek_hole": false, 00:15:35.680 "seek_data": false, 00:15:35.680 "copy": true, 00:15:35.680 "nvme_iov_md": false 00:15:35.680 }, 00:15:35.680 "memory_domains": [ 00:15:35.680 { 00:15:35.680 "dma_device_id": "system", 00:15:35.680 "dma_device_type": 1 00:15:35.680 }, 00:15:35.680 { 00:15:35.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.680 "dma_device_type": 2 00:15:35.680 } 00:15:35.680 ], 00:15:35.680 "driver_specific": {} 00:15:35.680 } 00:15:35.680 ] 00:15:35.680 18:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:35.680 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:35.680 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:35.681 
18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.681 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.940 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.940 "name": "Existed_Raid", 00:15:35.940 "uuid": "bae38190-da39-437b-87a1-819fd7050336", 00:15:35.940 "strip_size_kb": 0, 00:15:35.940 "state": "online", 00:15:35.940 "raid_level": "raid1", 00:15:35.940 "superblock": false, 00:15:35.940 "num_base_bdevs": 3, 00:15:35.940 "num_base_bdevs_discovered": 3, 00:15:35.940 "num_base_bdevs_operational": 3, 00:15:35.940 "base_bdevs_list": [ 00:15:35.940 { 00:15:35.940 "name": "BaseBdev1", 00:15:35.940 "uuid": "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:35.940 "is_configured": true, 00:15:35.940 "data_offset": 0, 00:15:35.940 "data_size": 65536 00:15:35.940 }, 00:15:35.940 { 00:15:35.940 "name": "BaseBdev2", 00:15:35.940 "uuid": "5ff8d654-97e0-4e82-98f7-9aaff3cb6812", 00:15:35.940 "is_configured": true, 00:15:35.940 "data_offset": 0, 
00:15:35.940 "data_size": 65536 00:15:35.940 }, 00:15:35.940 { 00:15:35.940 "name": "BaseBdev3", 00:15:35.940 "uuid": "5c15ae76-4b78-420d-ab91-28e2bacc1555", 00:15:35.940 "is_configured": true, 00:15:35.940 "data_offset": 0, 00:15:35.940 "data_size": 65536 00:15:35.940 } 00:15:35.940 ] 00:15:35.940 }' 00:15:35.940 18:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.940 18:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:36.875 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:36.875 [2024-07-15 18:30:22.224850] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.876 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:36.876 "name": "Existed_Raid", 00:15:36.876 "aliases": [ 00:15:36.876 "bae38190-da39-437b-87a1-819fd7050336" 00:15:36.876 ], 00:15:36.876 "product_name": "Raid Volume", 00:15:36.876 "block_size": 512, 00:15:36.876 "num_blocks": 65536, 00:15:36.876 "uuid": 
"bae38190-da39-437b-87a1-819fd7050336", 00:15:36.876 "assigned_rate_limits": { 00:15:36.876 "rw_ios_per_sec": 0, 00:15:36.876 "rw_mbytes_per_sec": 0, 00:15:36.876 "r_mbytes_per_sec": 0, 00:15:36.876 "w_mbytes_per_sec": 0 00:15:36.876 }, 00:15:36.876 "claimed": false, 00:15:36.876 "zoned": false, 00:15:36.876 "supported_io_types": { 00:15:36.876 "read": true, 00:15:36.876 "write": true, 00:15:36.876 "unmap": false, 00:15:36.876 "flush": false, 00:15:36.876 "reset": true, 00:15:36.876 "nvme_admin": false, 00:15:36.876 "nvme_io": false, 00:15:36.876 "nvme_io_md": false, 00:15:36.876 "write_zeroes": true, 00:15:36.876 "zcopy": false, 00:15:36.876 "get_zone_info": false, 00:15:36.876 "zone_management": false, 00:15:36.876 "zone_append": false, 00:15:36.876 "compare": false, 00:15:36.876 "compare_and_write": false, 00:15:36.876 "abort": false, 00:15:36.876 "seek_hole": false, 00:15:36.876 "seek_data": false, 00:15:36.876 "copy": false, 00:15:36.876 "nvme_iov_md": false 00:15:36.876 }, 00:15:36.876 "memory_domains": [ 00:15:36.876 { 00:15:36.876 "dma_device_id": "system", 00:15:36.876 "dma_device_type": 1 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.876 "dma_device_type": 2 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "dma_device_id": "system", 00:15:36.876 "dma_device_type": 1 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.876 "dma_device_type": 2 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "dma_device_id": "system", 00:15:36.876 "dma_device_type": 1 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.876 "dma_device_type": 2 00:15:36.876 } 00:15:36.876 ], 00:15:36.876 "driver_specific": { 00:15:36.876 "raid": { 00:15:36.876 "uuid": "bae38190-da39-437b-87a1-819fd7050336", 00:15:36.876 "strip_size_kb": 0, 00:15:36.876 "state": "online", 00:15:36.876 "raid_level": "raid1", 00:15:36.876 "superblock": false, 00:15:36.876 
"num_base_bdevs": 3, 00:15:36.876 "num_base_bdevs_discovered": 3, 00:15:36.876 "num_base_bdevs_operational": 3, 00:15:36.876 "base_bdevs_list": [ 00:15:36.876 { 00:15:36.876 "name": "BaseBdev1", 00:15:36.876 "uuid": "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:36.876 "is_configured": true, 00:15:36.876 "data_offset": 0, 00:15:36.876 "data_size": 65536 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "name": "BaseBdev2", 00:15:36.876 "uuid": "5ff8d654-97e0-4e82-98f7-9aaff3cb6812", 00:15:36.876 "is_configured": true, 00:15:36.876 "data_offset": 0, 00:15:36.876 "data_size": 65536 00:15:36.876 }, 00:15:36.876 { 00:15:36.876 "name": "BaseBdev3", 00:15:36.876 "uuid": "5c15ae76-4b78-420d-ab91-28e2bacc1555", 00:15:36.876 "is_configured": true, 00:15:36.876 "data_offset": 0, 00:15:36.876 "data_size": 65536 00:15:36.876 } 00:15:36.876 ] 00:15:36.876 } 00:15:36.876 } 00:15:36.876 }' 00:15:36.876 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.876 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:36.876 BaseBdev2 00:15:36.876 BaseBdev3' 00:15:36.876 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.876 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:36.876 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.136 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.136 "name": "BaseBdev1", 00:15:37.136 "aliases": [ 00:15:37.136 "c0eb576a-1ee3-4b09-b8b5-74e0593db8f4" 00:15:37.136 ], 00:15:37.136 "product_name": "Malloc disk", 00:15:37.136 "block_size": 512, 00:15:37.136 "num_blocks": 65536, 00:15:37.136 "uuid": 
"c0eb576a-1ee3-4b09-b8b5-74e0593db8f4", 00:15:37.136 "assigned_rate_limits": { 00:15:37.136 "rw_ios_per_sec": 0, 00:15:37.136 "rw_mbytes_per_sec": 0, 00:15:37.136 "r_mbytes_per_sec": 0, 00:15:37.136 "w_mbytes_per_sec": 0 00:15:37.136 }, 00:15:37.136 "claimed": true, 00:15:37.136 "claim_type": "exclusive_write", 00:15:37.136 "zoned": false, 00:15:37.136 "supported_io_types": { 00:15:37.136 "read": true, 00:15:37.136 "write": true, 00:15:37.136 "unmap": true, 00:15:37.136 "flush": true, 00:15:37.136 "reset": true, 00:15:37.136 "nvme_admin": false, 00:15:37.136 "nvme_io": false, 00:15:37.136 "nvme_io_md": false, 00:15:37.136 "write_zeroes": true, 00:15:37.136 "zcopy": true, 00:15:37.136 "get_zone_info": false, 00:15:37.136 "zone_management": false, 00:15:37.136 "zone_append": false, 00:15:37.136 "compare": false, 00:15:37.136 "compare_and_write": false, 00:15:37.136 "abort": true, 00:15:37.136 "seek_hole": false, 00:15:37.136 "seek_data": false, 00:15:37.136 "copy": true, 00:15:37.136 "nvme_iov_md": false 00:15:37.136 }, 00:15:37.136 "memory_domains": [ 00:15:37.136 { 00:15:37.136 "dma_device_id": "system", 00:15:37.136 "dma_device_type": 1 00:15:37.136 }, 00:15:37.136 { 00:15:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.136 "dma_device_type": 2 00:15:37.136 } 00:15:37.136 ], 00:15:37.136 "driver_specific": {} 00:15:37.136 }' 00:15:37.136 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.136 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.136 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.136 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.395 18:30:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:37.395 18:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.654 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.654 "name": "BaseBdev2", 00:15:37.654 "aliases": [ 00:15:37.654 "5ff8d654-97e0-4e82-98f7-9aaff3cb6812" 00:15:37.654 ], 00:15:37.654 "product_name": "Malloc disk", 00:15:37.654 "block_size": 512, 00:15:37.654 "num_blocks": 65536, 00:15:37.654 "uuid": "5ff8d654-97e0-4e82-98f7-9aaff3cb6812", 00:15:37.654 "assigned_rate_limits": { 00:15:37.654 "rw_ios_per_sec": 0, 00:15:37.654 "rw_mbytes_per_sec": 0, 00:15:37.654 "r_mbytes_per_sec": 0, 00:15:37.654 "w_mbytes_per_sec": 0 00:15:37.654 }, 00:15:37.654 "claimed": true, 00:15:37.654 "claim_type": "exclusive_write", 00:15:37.654 "zoned": false, 00:15:37.654 "supported_io_types": { 00:15:37.654 "read": true, 00:15:37.654 "write": true, 00:15:37.654 "unmap": true, 00:15:37.654 "flush": true, 00:15:37.654 "reset": true, 00:15:37.654 "nvme_admin": false, 00:15:37.654 "nvme_io": false, 00:15:37.654 "nvme_io_md": false, 
00:15:37.654 "write_zeroes": true, 00:15:37.654 "zcopy": true, 00:15:37.654 "get_zone_info": false, 00:15:37.654 "zone_management": false, 00:15:37.654 "zone_append": false, 00:15:37.654 "compare": false, 00:15:37.654 "compare_and_write": false, 00:15:37.654 "abort": true, 00:15:37.654 "seek_hole": false, 00:15:37.654 "seek_data": false, 00:15:37.654 "copy": true, 00:15:37.654 "nvme_iov_md": false 00:15:37.654 }, 00:15:37.654 "memory_domains": [ 00:15:37.654 { 00:15:37.654 "dma_device_id": "system", 00:15:37.654 "dma_device_type": 1 00:15:37.654 }, 00:15:37.654 { 00:15:37.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.654 "dma_device_type": 2 00:15:37.654 } 00:15:37.654 ], 00:15:37.654 "driver_specific": {} 00:15:37.654 }' 00:15:37.654 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.924 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.185 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.185 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.185 18:30:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.185 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:38.185 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.443 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.443 "name": "BaseBdev3", 00:15:38.443 "aliases": [ 00:15:38.443 "5c15ae76-4b78-420d-ab91-28e2bacc1555" 00:15:38.443 ], 00:15:38.443 "product_name": "Malloc disk", 00:15:38.443 "block_size": 512, 00:15:38.443 "num_blocks": 65536, 00:15:38.443 "uuid": "5c15ae76-4b78-420d-ab91-28e2bacc1555", 00:15:38.443 "assigned_rate_limits": { 00:15:38.443 "rw_ios_per_sec": 0, 00:15:38.443 "rw_mbytes_per_sec": 0, 00:15:38.443 "r_mbytes_per_sec": 0, 00:15:38.443 "w_mbytes_per_sec": 0 00:15:38.443 }, 00:15:38.443 "claimed": true, 00:15:38.443 "claim_type": "exclusive_write", 00:15:38.443 "zoned": false, 00:15:38.443 "supported_io_types": { 00:15:38.443 "read": true, 00:15:38.443 "write": true, 00:15:38.443 "unmap": true, 00:15:38.443 "flush": true, 00:15:38.443 "reset": true, 00:15:38.443 "nvme_admin": false, 00:15:38.443 "nvme_io": false, 00:15:38.443 "nvme_io_md": false, 00:15:38.443 "write_zeroes": true, 00:15:38.443 "zcopy": true, 00:15:38.443 "get_zone_info": false, 00:15:38.443 "zone_management": false, 00:15:38.443 "zone_append": false, 00:15:38.443 "compare": false, 00:15:38.443 "compare_and_write": false, 00:15:38.443 "abort": true, 00:15:38.443 "seek_hole": false, 00:15:38.443 "seek_data": false, 00:15:38.443 "copy": true, 00:15:38.443 "nvme_iov_md": false 00:15:38.443 }, 00:15:38.443 "memory_domains": [ 00:15:38.443 { 00:15:38.443 "dma_device_id": "system", 00:15:38.443 "dma_device_type": 1 00:15:38.443 }, 00:15:38.443 { 00:15:38.443 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:38.443 "dma_device_type": 2 00:15:38.443 } 00:15:38.443 ], 00:15:38.443 "driver_specific": {} 00:15:38.443 }' 00:15:38.443 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.443 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.443 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.443 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.443 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.781 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.781 18:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.781 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.782 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.782 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.782 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.782 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.782 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.364 [2024-07-15 18:30:24.647089] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.364 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.624 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.624 "name": "Existed_Raid", 00:15:39.624 "uuid": "bae38190-da39-437b-87a1-819fd7050336", 00:15:39.624 "strip_size_kb": 0, 00:15:39.624 "state": "online", 00:15:39.624 "raid_level": "raid1", 
00:15:39.624 "superblock": false, 00:15:39.624 "num_base_bdevs": 3, 00:15:39.624 "num_base_bdevs_discovered": 2, 00:15:39.624 "num_base_bdevs_operational": 2, 00:15:39.624 "base_bdevs_list": [ 00:15:39.624 { 00:15:39.624 "name": null, 00:15:39.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.624 "is_configured": false, 00:15:39.624 "data_offset": 0, 00:15:39.624 "data_size": 65536 00:15:39.624 }, 00:15:39.624 { 00:15:39.624 "name": "BaseBdev2", 00:15:39.624 "uuid": "5ff8d654-97e0-4e82-98f7-9aaff3cb6812", 00:15:39.624 "is_configured": true, 00:15:39.624 "data_offset": 0, 00:15:39.624 "data_size": 65536 00:15:39.624 }, 00:15:39.624 { 00:15:39.624 "name": "BaseBdev3", 00:15:39.624 "uuid": "5c15ae76-4b78-420d-ab91-28e2bacc1555", 00:15:39.624 "is_configured": true, 00:15:39.624 "data_offset": 0, 00:15:39.624 "data_size": 65536 00:15:39.624 } 00:15:39.624 ] 00:15:39.624 }' 00:15:39.624 18:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.624 18:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:40.192 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.192 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.192 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:40.452 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:40.452 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.452 18:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:41.021 [2024-07-15 18:30:26.272534] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.021 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:41.280 [2024-07-15 18:30:26.804726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.280 [2024-07-15 18:30:26.804804] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.280 [2024-07-15 18:30:26.815521] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.280 [2024-07-15 18:30:26.815554] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.280 [2024-07-15 18:30:26.815563] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1632360 name Existed_Raid, state offline 00:15:41.539 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:41.539 18:30:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:41.539 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.539 18:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.798 BaseBdev2 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:41.798 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.057 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.057 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:42.057 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.317 [ 00:15:42.317 { 00:15:42.317 "name": "BaseBdev2", 00:15:42.317 "aliases": [ 00:15:42.317 "4b92e097-2ebc-4b93-8653-22f8445477cd" 00:15:42.317 ], 00:15:42.317 "product_name": "Malloc disk", 00:15:42.317 "block_size": 512, 00:15:42.317 "num_blocks": 65536, 00:15:42.317 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:42.317 "assigned_rate_limits": { 00:15:42.317 "rw_ios_per_sec": 0, 00:15:42.317 "rw_mbytes_per_sec": 0, 00:15:42.317 "r_mbytes_per_sec": 0, 00:15:42.317 "w_mbytes_per_sec": 0 00:15:42.317 }, 00:15:42.317 "claimed": false, 00:15:42.317 "zoned": false, 00:15:42.317 "supported_io_types": { 00:15:42.317 "read": true, 00:15:42.317 "write": true, 00:15:42.317 "unmap": true, 00:15:42.317 "flush": true, 00:15:42.317 "reset": true, 00:15:42.317 "nvme_admin": false, 00:15:42.317 "nvme_io": false, 00:15:42.317 "nvme_io_md": false, 00:15:42.317 "write_zeroes": true, 00:15:42.317 "zcopy": true, 00:15:42.317 "get_zone_info": false, 00:15:42.317 "zone_management": false, 00:15:42.317 "zone_append": false, 00:15:42.317 "compare": false, 00:15:42.317 "compare_and_write": false, 00:15:42.317 "abort": true, 00:15:42.317 "seek_hole": false, 00:15:42.317 "seek_data": false, 00:15:42.317 "copy": true, 00:15:42.317 "nvme_iov_md": false 00:15:42.317 }, 00:15:42.317 "memory_domains": [ 00:15:42.317 { 00:15:42.317 "dma_device_id": "system", 00:15:42.317 "dma_device_type": 1 00:15:42.317 }, 00:15:42.317 { 00:15:42.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.317 "dma_device_type": 2 00:15:42.317 } 00:15:42.317 ], 00:15:42.317 "driver_specific": {} 00:15:42.317 } 00:15:42.317 ] 00:15:42.317 18:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:42.317 
18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:42.317 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:42.317 18:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:42.576 BaseBdev3 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.576 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:42.835 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.094 [ 00:15:43.094 { 00:15:43.094 "name": "BaseBdev3", 00:15:43.094 "aliases": [ 00:15:43.094 "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0" 00:15:43.094 ], 00:15:43.094 "product_name": "Malloc disk", 00:15:43.094 "block_size": 512, 00:15:43.094 "num_blocks": 65536, 00:15:43.094 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:43.094 "assigned_rate_limits": { 00:15:43.094 "rw_ios_per_sec": 0, 00:15:43.094 "rw_mbytes_per_sec": 0, 00:15:43.094 
"r_mbytes_per_sec": 0, 00:15:43.094 "w_mbytes_per_sec": 0 00:15:43.094 }, 00:15:43.094 "claimed": false, 00:15:43.094 "zoned": false, 00:15:43.094 "supported_io_types": { 00:15:43.094 "read": true, 00:15:43.094 "write": true, 00:15:43.094 "unmap": true, 00:15:43.094 "flush": true, 00:15:43.094 "reset": true, 00:15:43.094 "nvme_admin": false, 00:15:43.094 "nvme_io": false, 00:15:43.094 "nvme_io_md": false, 00:15:43.094 "write_zeroes": true, 00:15:43.094 "zcopy": true, 00:15:43.094 "get_zone_info": false, 00:15:43.094 "zone_management": false, 00:15:43.094 "zone_append": false, 00:15:43.094 "compare": false, 00:15:43.094 "compare_and_write": false, 00:15:43.094 "abort": true, 00:15:43.094 "seek_hole": false, 00:15:43.094 "seek_data": false, 00:15:43.094 "copy": true, 00:15:43.094 "nvme_iov_md": false 00:15:43.094 }, 00:15:43.094 "memory_domains": [ 00:15:43.094 { 00:15:43.094 "dma_device_id": "system", 00:15:43.094 "dma_device_type": 1 00:15:43.094 }, 00:15:43.094 { 00:15:43.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.094 "dma_device_type": 2 00:15:43.094 } 00:15:43.094 ], 00:15:43.094 "driver_specific": {} 00:15:43.094 } 00:15:43.094 ] 00:15:43.094 18:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:43.094 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:43.094 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:43.094 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:43.354 [2024-07-15 18:30:28.849384] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.354 [2024-07-15 18:30:28.849424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:15:43.354 [2024-07-15 18:30:28.849443] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.354 [2024-07-15 18:30:28.850839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.354 18:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.613 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.613 "name": "Existed_Raid", 00:15:43.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.613 "strip_size_kb": 0, 00:15:43.613 "state": 
"configuring", 00:15:43.613 "raid_level": "raid1", 00:15:43.613 "superblock": false, 00:15:43.613 "num_base_bdevs": 3, 00:15:43.613 "num_base_bdevs_discovered": 2, 00:15:43.613 "num_base_bdevs_operational": 3, 00:15:43.613 "base_bdevs_list": [ 00:15:43.613 { 00:15:43.613 "name": "BaseBdev1", 00:15:43.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.613 "is_configured": false, 00:15:43.613 "data_offset": 0, 00:15:43.613 "data_size": 0 00:15:43.613 }, 00:15:43.613 { 00:15:43.613 "name": "BaseBdev2", 00:15:43.613 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:43.613 "is_configured": true, 00:15:43.613 "data_offset": 0, 00:15:43.613 "data_size": 65536 00:15:43.613 }, 00:15:43.613 { 00:15:43.613 "name": "BaseBdev3", 00:15:43.613 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:43.613 "is_configured": true, 00:15:43.613 "data_offset": 0, 00:15:43.613 "data_size": 65536 00:15:43.613 } 00:15:43.613 ] 00:15:43.613 }' 00:15:43.613 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.613 18:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:44.549 [2024-07-15 18:30:29.980421] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.549 18:30:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.549 18:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.549 18:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.549 18:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.808 18:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.808 "name": "Existed_Raid", 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.808 "strip_size_kb": 0, 00:15:44.808 "state": "configuring", 00:15:44.808 "raid_level": "raid1", 00:15:44.808 "superblock": false, 00:15:44.808 "num_base_bdevs": 3, 00:15:44.808 "num_base_bdevs_discovered": 1, 00:15:44.808 "num_base_bdevs_operational": 3, 00:15:44.808 "base_bdevs_list": [ 00:15:44.808 { 00:15:44.808 "name": "BaseBdev1", 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.808 "is_configured": false, 00:15:44.808 "data_offset": 0, 00:15:44.808 "data_size": 0 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "name": null, 00:15:44.808 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:44.808 "is_configured": false, 00:15:44.808 "data_offset": 0, 00:15:44.808 "data_size": 65536 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "name": "BaseBdev3", 00:15:44.808 "uuid": 
"cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:44.808 "is_configured": true, 00:15:44.808 "data_offset": 0, 00:15:44.808 "data_size": 65536 00:15:44.808 } 00:15:44.808 ] 00:15:44.808 }' 00:15:44.808 18:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.808 18:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.376 18:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.376 18:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.635 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:45.635 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.921 [2024-07-15 18:30:31.351509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.921 BaseBdev1 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:45.921 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.179 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:46.437 [ 00:15:46.437 { 00:15:46.437 "name": "BaseBdev1", 00:15:46.437 "aliases": [ 00:15:46.437 "003f5289-6b5d-4ad9-987e-7d1343d50a78" 00:15:46.437 ], 00:15:46.437 "product_name": "Malloc disk", 00:15:46.437 "block_size": 512, 00:15:46.437 "num_blocks": 65536, 00:15:46.437 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:46.437 "assigned_rate_limits": { 00:15:46.437 "rw_ios_per_sec": 0, 00:15:46.437 "rw_mbytes_per_sec": 0, 00:15:46.437 "r_mbytes_per_sec": 0, 00:15:46.437 "w_mbytes_per_sec": 0 00:15:46.437 }, 00:15:46.437 "claimed": true, 00:15:46.437 "claim_type": "exclusive_write", 00:15:46.437 "zoned": false, 00:15:46.437 "supported_io_types": { 00:15:46.437 "read": true, 00:15:46.437 "write": true, 00:15:46.437 "unmap": true, 00:15:46.437 "flush": true, 00:15:46.437 "reset": true, 00:15:46.437 "nvme_admin": false, 00:15:46.437 "nvme_io": false, 00:15:46.437 "nvme_io_md": false, 00:15:46.437 "write_zeroes": true, 00:15:46.437 "zcopy": true, 00:15:46.437 "get_zone_info": false, 00:15:46.437 "zone_management": false, 00:15:46.437 "zone_append": false, 00:15:46.437 "compare": false, 00:15:46.437 "compare_and_write": false, 00:15:46.437 "abort": true, 00:15:46.437 "seek_hole": false, 00:15:46.437 "seek_data": false, 00:15:46.437 "copy": true, 00:15:46.437 "nvme_iov_md": false 00:15:46.437 }, 00:15:46.437 "memory_domains": [ 00:15:46.437 { 00:15:46.437 "dma_device_id": "system", 00:15:46.437 "dma_device_type": 1 00:15:46.437 }, 00:15:46.437 { 00:15:46.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.437 "dma_device_type": 2 00:15:46.437 } 00:15:46.437 ], 00:15:46.437 "driver_specific": {} 00:15:46.437 } 00:15:46.437 ] 
00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.437 18:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.695 18:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.695 "name": "Existed_Raid", 00:15:46.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.695 "strip_size_kb": 0, 00:15:46.695 "state": "configuring", 00:15:46.695 "raid_level": "raid1", 00:15:46.695 "superblock": false, 00:15:46.695 "num_base_bdevs": 3, 00:15:46.695 
"num_base_bdevs_discovered": 2, 00:15:46.695 "num_base_bdevs_operational": 3, 00:15:46.695 "base_bdevs_list": [ 00:15:46.695 { 00:15:46.695 "name": "BaseBdev1", 00:15:46.695 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:46.695 "is_configured": true, 00:15:46.695 "data_offset": 0, 00:15:46.695 "data_size": 65536 00:15:46.695 }, 00:15:46.695 { 00:15:46.695 "name": null, 00:15:46.695 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:46.695 "is_configured": false, 00:15:46.695 "data_offset": 0, 00:15:46.695 "data_size": 65536 00:15:46.695 }, 00:15:46.695 { 00:15:46.695 "name": "BaseBdev3", 00:15:46.695 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:46.695 "is_configured": true, 00:15:46.695 "data_offset": 0, 00:15:46.695 "data_size": 65536 00:15:46.695 } 00:15:46.695 ] 00:15:46.695 }' 00:15:46.695 18:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.695 18:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 18:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.260 18:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.517 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:47.517 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:47.837 [2024-07-15 18:30:33.268696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.837 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.099 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.099 "name": "Existed_Raid", 00:15:48.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.099 "strip_size_kb": 0, 00:15:48.099 "state": "configuring", 00:15:48.099 "raid_level": "raid1", 00:15:48.099 "superblock": false, 00:15:48.099 "num_base_bdevs": 3, 00:15:48.099 "num_base_bdevs_discovered": 1, 00:15:48.099 "num_base_bdevs_operational": 3, 00:15:48.099 "base_bdevs_list": [ 00:15:48.099 { 00:15:48.099 "name": "BaseBdev1", 00:15:48.099 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:48.099 "is_configured": true, 00:15:48.099 "data_offset": 0, 00:15:48.099 "data_size": 65536 
00:15:48.099 }, 00:15:48.099 { 00:15:48.099 "name": null, 00:15:48.099 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:48.099 "is_configured": false, 00:15:48.099 "data_offset": 0, 00:15:48.099 "data_size": 65536 00:15:48.099 }, 00:15:48.099 { 00:15:48.099 "name": null, 00:15:48.099 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:48.099 "is_configured": false, 00:15:48.099 "data_offset": 0, 00:15:48.099 "data_size": 65536 00:15:48.099 } 00:15:48.099 ] 00:15:48.099 }' 00:15:48.099 18:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.099 18:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.664 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.664 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.922 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:48.922 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:49.180 [2024-07-15 18:30:34.676508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.180 18:30:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.180 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.438 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.438 "name": "Existed_Raid", 00:15:49.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.438 "strip_size_kb": 0, 00:15:49.438 "state": "configuring", 00:15:49.438 "raid_level": "raid1", 00:15:49.438 "superblock": false, 00:15:49.438 "num_base_bdevs": 3, 00:15:49.438 "num_base_bdevs_discovered": 2, 00:15:49.438 "num_base_bdevs_operational": 3, 00:15:49.438 "base_bdevs_list": [ 00:15:49.438 { 00:15:49.438 "name": "BaseBdev1", 00:15:49.438 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:49.438 "is_configured": true, 00:15:49.438 "data_offset": 0, 00:15:49.438 "data_size": 65536 00:15:49.438 }, 00:15:49.438 { 00:15:49.438 "name": null, 00:15:49.438 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:49.438 "is_configured": false, 00:15:49.438 "data_offset": 0, 00:15:49.438 "data_size": 65536 00:15:49.438 }, 00:15:49.438 { 00:15:49.438 "name": "BaseBdev3", 00:15:49.438 "uuid": 
"cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:49.438 "is_configured": true, 00:15:49.438 "data_offset": 0, 00:15:49.438 "data_size": 65536 00:15:49.438 } 00:15:49.438 ] 00:15:49.438 }' 00:15:49.438 18:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.438 18:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.373 18:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.373 18:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.373 18:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:50.373 18:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:50.632 [2024-07-15 18:30:36.024161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.632 18:30:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.632 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.890 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.890 "name": "Existed_Raid", 00:15:50.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.890 "strip_size_kb": 0, 00:15:50.890 "state": "configuring", 00:15:50.890 "raid_level": "raid1", 00:15:50.890 "superblock": false, 00:15:50.890 "num_base_bdevs": 3, 00:15:50.890 "num_base_bdevs_discovered": 1, 00:15:50.890 "num_base_bdevs_operational": 3, 00:15:50.890 "base_bdevs_list": [ 00:15:50.890 { 00:15:50.890 "name": null, 00:15:50.890 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:50.890 "is_configured": false, 00:15:50.890 "data_offset": 0, 00:15:50.890 "data_size": 65536 00:15:50.890 }, 00:15:50.890 { 00:15:50.890 "name": null, 00:15:50.890 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:50.890 "is_configured": false, 00:15:50.890 "data_offset": 0, 00:15:50.890 "data_size": 65536 00:15:50.890 }, 00:15:50.890 { 00:15:50.890 "name": "BaseBdev3", 00:15:50.890 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:50.890 "is_configured": true, 00:15:50.890 "data_offset": 0, 00:15:50.890 "data_size": 65536 00:15:50.890 } 00:15:50.890 ] 00:15:50.890 }' 00:15:50.890 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.890 18:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.457 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.457 18:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.716 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:51.716 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:51.975 [2024-07-15 18:30:37.426582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.975 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.234 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.234 "name": "Existed_Raid", 00:15:52.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.234 "strip_size_kb": 0, 00:15:52.234 "state": "configuring", 00:15:52.234 "raid_level": "raid1", 00:15:52.234 "superblock": false, 00:15:52.234 "num_base_bdevs": 3, 00:15:52.234 "num_base_bdevs_discovered": 2, 00:15:52.234 "num_base_bdevs_operational": 3, 00:15:52.234 "base_bdevs_list": [ 00:15:52.234 { 00:15:52.234 "name": null, 00:15:52.234 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:52.234 "is_configured": false, 00:15:52.234 "data_offset": 0, 00:15:52.234 "data_size": 65536 00:15:52.234 }, 00:15:52.234 { 00:15:52.234 "name": "BaseBdev2", 00:15:52.234 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:52.234 "is_configured": true, 00:15:52.234 "data_offset": 0, 00:15:52.234 "data_size": 65536 00:15:52.234 }, 00:15:52.234 { 00:15:52.234 "name": "BaseBdev3", 00:15:52.234 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:52.234 "is_configured": true, 00:15:52.234 "data_offset": 0, 00:15:52.234 "data_size": 65536 00:15:52.234 } 00:15:52.234 ] 00:15:52.234 }' 00:15:52.234 18:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.234 18:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.802 18:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:52.803 18:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.061 18:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:53.061 18:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.061 18:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:53.360 18:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 003f5289-6b5d-4ad9-987e-7d1343d50a78 00:15:53.635 [2024-07-15 18:30:39.090133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:53.635 [2024-07-15 18:30:39.090171] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1631fd0 00:15:53.635 [2024-07-15 18:30:39.090178] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:53.635 [2024-07-15 18:30:39.090395] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1632ee0 00:15:53.635 [2024-07-15 18:30:39.090526] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1631fd0 00:15:53.635 [2024-07-15 18:30:39.090535] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1631fd0 00:15:53.635 [2024-07-15 18:30:39.090701] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.635 NewBaseBdev 00:15:53.635 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:53.635 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:53.635 18:30:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:53.635 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:53.635 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:53.635 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:53.635 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.895 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:54.155 [ 00:15:54.155 { 00:15:54.155 "name": "NewBaseBdev", 00:15:54.155 "aliases": [ 00:15:54.155 "003f5289-6b5d-4ad9-987e-7d1343d50a78" 00:15:54.155 ], 00:15:54.155 "product_name": "Malloc disk", 00:15:54.155 "block_size": 512, 00:15:54.155 "num_blocks": 65536, 00:15:54.155 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:54.155 "assigned_rate_limits": { 00:15:54.155 "rw_ios_per_sec": 0, 00:15:54.155 "rw_mbytes_per_sec": 0, 00:15:54.155 "r_mbytes_per_sec": 0, 00:15:54.155 "w_mbytes_per_sec": 0 00:15:54.155 }, 00:15:54.155 "claimed": true, 00:15:54.155 "claim_type": "exclusive_write", 00:15:54.155 "zoned": false, 00:15:54.155 "supported_io_types": { 00:15:54.155 "read": true, 00:15:54.155 "write": true, 00:15:54.155 "unmap": true, 00:15:54.155 "flush": true, 00:15:54.155 "reset": true, 00:15:54.155 "nvme_admin": false, 00:15:54.155 "nvme_io": false, 00:15:54.155 "nvme_io_md": false, 00:15:54.155 "write_zeroes": true, 00:15:54.155 "zcopy": true, 00:15:54.155 "get_zone_info": false, 00:15:54.155 "zone_management": false, 00:15:54.155 "zone_append": false, 00:15:54.155 "compare": false, 00:15:54.155 "compare_and_write": false, 
00:15:54.155 "abort": true, 00:15:54.155 "seek_hole": false, 00:15:54.155 "seek_data": false, 00:15:54.155 "copy": true, 00:15:54.155 "nvme_iov_md": false 00:15:54.155 }, 00:15:54.155 "memory_domains": [ 00:15:54.155 { 00:15:54.155 "dma_device_id": "system", 00:15:54.155 "dma_device_type": 1 00:15:54.155 }, 00:15:54.155 { 00:15:54.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.155 "dma_device_type": 2 00:15:54.155 } 00:15:54.155 ], 00:15:54.155 "driver_specific": {} 00:15:54.155 } 00:15:54.155 ] 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.155 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.155 18:30:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.414 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.414 "name": "Existed_Raid", 00:15:54.414 "uuid": "11d14539-1fdf-4b1f-8743-5297ea1c73dd", 00:15:54.414 "strip_size_kb": 0, 00:15:54.414 "state": "online", 00:15:54.414 "raid_level": "raid1", 00:15:54.414 "superblock": false, 00:15:54.414 "num_base_bdevs": 3, 00:15:54.414 "num_base_bdevs_discovered": 3, 00:15:54.414 "num_base_bdevs_operational": 3, 00:15:54.414 "base_bdevs_list": [ 00:15:54.414 { 00:15:54.414 "name": "NewBaseBdev", 00:15:54.414 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev2", 00:15:54.414 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev3", 00:15:54.414 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 } 00:15:54.414 ] 00:15:54.414 }' 00:15:54.414 18:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.414 18:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:54.981 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:54.981 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:54.981 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:54.981 
18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:54.982 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:54.982 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:54.982 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:55.240 [2024-07-15 18:30:40.706808] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.240 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:55.240 "name": "Existed_Raid", 00:15:55.240 "aliases": [ 00:15:55.240 "11d14539-1fdf-4b1f-8743-5297ea1c73dd" 00:15:55.240 ], 00:15:55.240 "product_name": "Raid Volume", 00:15:55.240 "block_size": 512, 00:15:55.240 "num_blocks": 65536, 00:15:55.240 "uuid": "11d14539-1fdf-4b1f-8743-5297ea1c73dd", 00:15:55.240 "assigned_rate_limits": { 00:15:55.240 "rw_ios_per_sec": 0, 00:15:55.240 "rw_mbytes_per_sec": 0, 00:15:55.240 "r_mbytes_per_sec": 0, 00:15:55.240 "w_mbytes_per_sec": 0 00:15:55.240 }, 00:15:55.240 "claimed": false, 00:15:55.240 "zoned": false, 00:15:55.240 "supported_io_types": { 00:15:55.240 "read": true, 00:15:55.240 "write": true, 00:15:55.240 "unmap": false, 00:15:55.240 "flush": false, 00:15:55.240 "reset": true, 00:15:55.240 "nvme_admin": false, 00:15:55.240 "nvme_io": false, 00:15:55.240 "nvme_io_md": false, 00:15:55.240 "write_zeroes": true, 00:15:55.240 "zcopy": false, 00:15:55.240 "get_zone_info": false, 00:15:55.240 "zone_management": false, 00:15:55.240 "zone_append": false, 00:15:55.240 "compare": false, 00:15:55.240 "compare_and_write": false, 00:15:55.240 "abort": false, 00:15:55.240 "seek_hole": false, 00:15:55.240 "seek_data": false, 00:15:55.240 "copy": false, 00:15:55.240 "nvme_iov_md": false 00:15:55.240 }, 00:15:55.240 
"memory_domains": [ 00:15:55.240 { 00:15:55.240 "dma_device_id": "system", 00:15:55.240 "dma_device_type": 1 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.240 "dma_device_type": 2 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "dma_device_id": "system", 00:15:55.240 "dma_device_type": 1 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.240 "dma_device_type": 2 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "dma_device_id": "system", 00:15:55.240 "dma_device_type": 1 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.240 "dma_device_type": 2 00:15:55.240 } 00:15:55.240 ], 00:15:55.240 "driver_specific": { 00:15:55.240 "raid": { 00:15:55.240 "uuid": "11d14539-1fdf-4b1f-8743-5297ea1c73dd", 00:15:55.240 "strip_size_kb": 0, 00:15:55.240 "state": "online", 00:15:55.240 "raid_level": "raid1", 00:15:55.240 "superblock": false, 00:15:55.240 "num_base_bdevs": 3, 00:15:55.240 "num_base_bdevs_discovered": 3, 00:15:55.240 "num_base_bdevs_operational": 3, 00:15:55.240 "base_bdevs_list": [ 00:15:55.240 { 00:15:55.240 "name": "NewBaseBdev", 00:15:55.240 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:55.240 "is_configured": true, 00:15:55.240 "data_offset": 0, 00:15:55.240 "data_size": 65536 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "name": "BaseBdev2", 00:15:55.240 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:55.240 "is_configured": true, 00:15:55.240 "data_offset": 0, 00:15:55.240 "data_size": 65536 00:15:55.240 }, 00:15:55.240 { 00:15:55.240 "name": "BaseBdev3", 00:15:55.240 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:55.240 "is_configured": true, 00:15:55.240 "data_offset": 0, 00:15:55.240 "data_size": 65536 00:15:55.240 } 00:15:55.240 ] 00:15:55.240 } 00:15:55.240 } 00:15:55.240 }' 00:15:55.240 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:55.240 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:55.240 BaseBdev2 00:15:55.240 BaseBdev3' 00:15:55.240 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:55.240 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:55.240 18:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:55.498 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:55.499 "name": "NewBaseBdev", 00:15:55.499 "aliases": [ 00:15:55.499 "003f5289-6b5d-4ad9-987e-7d1343d50a78" 00:15:55.499 ], 00:15:55.499 "product_name": "Malloc disk", 00:15:55.499 "block_size": 512, 00:15:55.499 "num_blocks": 65536, 00:15:55.499 "uuid": "003f5289-6b5d-4ad9-987e-7d1343d50a78", 00:15:55.499 "assigned_rate_limits": { 00:15:55.499 "rw_ios_per_sec": 0, 00:15:55.499 "rw_mbytes_per_sec": 0, 00:15:55.499 "r_mbytes_per_sec": 0, 00:15:55.499 "w_mbytes_per_sec": 0 00:15:55.499 }, 00:15:55.499 "claimed": true, 00:15:55.499 "claim_type": "exclusive_write", 00:15:55.499 "zoned": false, 00:15:55.499 "supported_io_types": { 00:15:55.499 "read": true, 00:15:55.499 "write": true, 00:15:55.499 "unmap": true, 00:15:55.499 "flush": true, 00:15:55.499 "reset": true, 00:15:55.499 "nvme_admin": false, 00:15:55.499 "nvme_io": false, 00:15:55.499 "nvme_io_md": false, 00:15:55.499 "write_zeroes": true, 00:15:55.499 "zcopy": true, 00:15:55.499 "get_zone_info": false, 00:15:55.499 "zone_management": false, 00:15:55.499 "zone_append": false, 00:15:55.499 "compare": false, 00:15:55.499 "compare_and_write": false, 00:15:55.499 "abort": true, 00:15:55.499 "seek_hole": false, 00:15:55.499 "seek_data": false, 00:15:55.499 "copy": true, 00:15:55.499 "nvme_iov_md": 
false 00:15:55.499 }, 00:15:55.499 "memory_domains": [ 00:15:55.499 { 00:15:55.499 "dma_device_id": "system", 00:15:55.499 "dma_device_type": 1 00:15:55.499 }, 00:15:55.499 { 00:15:55.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.499 "dma_device_type": 2 00:15:55.499 } 00:15:55.499 ], 00:15:55.499 "driver_specific": {} 00:15:55.499 }' 00:15:55.499 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:55.757 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:56.015 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:15:56.274 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.274 "name": "BaseBdev2", 00:15:56.274 "aliases": [ 00:15:56.274 "4b92e097-2ebc-4b93-8653-22f8445477cd" 00:15:56.274 ], 00:15:56.274 "product_name": "Malloc disk", 00:15:56.274 "block_size": 512, 00:15:56.274 "num_blocks": 65536, 00:15:56.274 "uuid": "4b92e097-2ebc-4b93-8653-22f8445477cd", 00:15:56.274 "assigned_rate_limits": { 00:15:56.274 "rw_ios_per_sec": 0, 00:15:56.274 "rw_mbytes_per_sec": 0, 00:15:56.274 "r_mbytes_per_sec": 0, 00:15:56.274 "w_mbytes_per_sec": 0 00:15:56.274 }, 00:15:56.274 "claimed": true, 00:15:56.274 "claim_type": "exclusive_write", 00:15:56.274 "zoned": false, 00:15:56.274 "supported_io_types": { 00:15:56.274 "read": true, 00:15:56.274 "write": true, 00:15:56.274 "unmap": true, 00:15:56.274 "flush": true, 00:15:56.274 "reset": true, 00:15:56.274 "nvme_admin": false, 00:15:56.274 "nvme_io": false, 00:15:56.274 "nvme_io_md": false, 00:15:56.274 "write_zeroes": true, 00:15:56.274 "zcopy": true, 00:15:56.274 "get_zone_info": false, 00:15:56.274 "zone_management": false, 00:15:56.274 "zone_append": false, 00:15:56.274 "compare": false, 00:15:56.274 "compare_and_write": false, 00:15:56.274 "abort": true, 00:15:56.274 "seek_hole": false, 00:15:56.274 "seek_data": false, 00:15:56.274 "copy": true, 00:15:56.274 "nvme_iov_md": false 00:15:56.274 }, 00:15:56.274 "memory_domains": [ 00:15:56.274 { 00:15:56.274 "dma_device_id": "system", 00:15:56.274 "dma_device_type": 1 00:15:56.274 }, 00:15:56.274 { 00:15:56.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.274 "dma_device_type": 2 00:15:56.274 } 00:15:56.274 ], 00:15:56.274 "driver_specific": {} 00:15:56.274 }' 00:15:56.274 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.274 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.274 18:30:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.274 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.274 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.533 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.533 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.533 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.533 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.533 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.533 18:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.533 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.533 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.533 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:56.533 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.791 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.791 "name": "BaseBdev3", 00:15:56.791 "aliases": [ 00:15:56.791 "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0" 00:15:56.791 ], 00:15:56.791 "product_name": "Malloc disk", 00:15:56.791 "block_size": 512, 00:15:56.791 "num_blocks": 65536, 00:15:56.791 "uuid": "cfd0fc18-d0c2-4d7b-b09e-6ef12c9efea0", 00:15:56.791 "assigned_rate_limits": { 00:15:56.791 "rw_ios_per_sec": 0, 00:15:56.791 "rw_mbytes_per_sec": 0, 00:15:56.791 "r_mbytes_per_sec": 0, 00:15:56.791 "w_mbytes_per_sec": 0 00:15:56.791 }, 
00:15:56.791 "claimed": true, 00:15:56.791 "claim_type": "exclusive_write", 00:15:56.791 "zoned": false, 00:15:56.791 "supported_io_types": { 00:15:56.791 "read": true, 00:15:56.791 "write": true, 00:15:56.791 "unmap": true, 00:15:56.791 "flush": true, 00:15:56.791 "reset": true, 00:15:56.791 "nvme_admin": false, 00:15:56.791 "nvme_io": false, 00:15:56.791 "nvme_io_md": false, 00:15:56.791 "write_zeroes": true, 00:15:56.791 "zcopy": true, 00:15:56.791 "get_zone_info": false, 00:15:56.791 "zone_management": false, 00:15:56.791 "zone_append": false, 00:15:56.791 "compare": false, 00:15:56.791 "compare_and_write": false, 00:15:56.791 "abort": true, 00:15:56.791 "seek_hole": false, 00:15:56.791 "seek_data": false, 00:15:56.791 "copy": true, 00:15:56.791 "nvme_iov_md": false 00:15:56.791 }, 00:15:56.791 "memory_domains": [ 00:15:56.791 { 00:15:56.791 "dma_device_id": "system", 00:15:56.791 "dma_device_type": 1 00:15:56.791 }, 00:15:56.791 { 00:15:56.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.791 "dma_device_type": 2 00:15:56.791 } 00:15:56.791 ], 00:15:56.791 "driver_specific": {} 00:15:56.791 }' 00:15:56.791 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.048 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.305 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.305 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.305 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:57.562 [2024-07-15 18:30:42.900560] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.562 [2024-07-15 18:30:42.900583] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.562 [2024-07-15 18:30:42.900631] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.562 [2024-07-15 18:30:42.900900] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.562 [2024-07-15 18:30:42.900910] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1631fd0 name Existed_Raid, state offline 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2813897 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2813897 ']' 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2813897 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813897 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813897' 00:15:57.562 killing process with pid 2813897 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2813897 00:15:57.562 [2024-07-15 18:30:42.965857] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.562 18:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2813897 00:15:57.562 [2024-07-15 18:30:42.991492] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:57.820 00:15:57.820 real 0m30.116s 00:15:57.820 user 0m56.566s 00:15:57.820 sys 0m4.141s 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.820 ************************************ 00:15:57.820 END TEST raid_state_function_test 00:15:57.820 ************************************ 00:15:57.820 18:30:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:57.820 18:30:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:57.820 18:30:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:57.820 18:30:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.820 18:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.820 ************************************ 00:15:57.820 START TEST raid_state_function_test_sb 00:15:57.820 ************************************ 00:15:57.820 18:30:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:57.820 
18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2819165 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2819165' 00:15:57.820 Process raid pid: 2819165 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2819165 /var/tmp/spdk-raid.sock 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2819165 ']' 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.820 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.820 [2024-07-15 18:30:43.291448] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:57.820 [2024-07-15 18:30:43.291512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.077 [2024-07-15 18:30:43.392657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.077 [2024-07-15 18:30:43.491169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.077 [2024-07-15 18:30:43.546864] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.077 [2024-07-15 18:30:43.546889] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.335 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.335 18:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:58.335 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:58.593 [2024-07-15 18:30:43.976284] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.593 [2024-07-15 18:30:43.976321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.593 
[2024-07-15 18:30:43.976330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.593 [2024-07-15 18:30:43.976339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.593 [2024-07-15 18:30:43.976349] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.593 [2024-07-15 18:30:43.976357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.593 18:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:15:58.850 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.850 "name": "Existed_Raid", 00:15:58.850 "uuid": "e10cd746-6138-4733-9e62-c450a46bc364", 00:15:58.850 "strip_size_kb": 0, 00:15:58.850 "state": "configuring", 00:15:58.850 "raid_level": "raid1", 00:15:58.850 "superblock": true, 00:15:58.850 "num_base_bdevs": 3, 00:15:58.850 "num_base_bdevs_discovered": 0, 00:15:58.850 "num_base_bdevs_operational": 3, 00:15:58.850 "base_bdevs_list": [ 00:15:58.850 { 00:15:58.850 "name": "BaseBdev1", 00:15:58.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.850 "is_configured": false, 00:15:58.850 "data_offset": 0, 00:15:58.850 "data_size": 0 00:15:58.850 }, 00:15:58.850 { 00:15:58.850 "name": "BaseBdev2", 00:15:58.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.850 "is_configured": false, 00:15:58.850 "data_offset": 0, 00:15:58.850 "data_size": 0 00:15:58.850 }, 00:15:58.850 { 00:15:58.850 "name": "BaseBdev3", 00:15:58.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.850 "is_configured": false, 00:15:58.850 "data_offset": 0, 00:15:58.850 "data_size": 0 00:15:58.850 } 00:15:58.850 ] 00:15:58.850 }' 00:15:58.850 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.850 18:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.415 18:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.673 [2024-07-15 18:30:45.119202] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.673 [2024-07-15 18:30:45.119229] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e1ba0 name Existed_Raid, state configuring 00:15:59.673 18:30:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:59.950 [2024-07-15 18:30:45.371906] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.950 [2024-07-15 18:30:45.371940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.950 [2024-07-15 18:30:45.371954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.950 [2024-07-15 18:30:45.371963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.950 [2024-07-15 18:30:45.371970] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.950 [2024-07-15 18:30:45.371978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.950 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.209 [2024-07-15 18:30:45.634084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.209 BaseBdev1 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:00.209 18:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.467 18:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.725 [ 00:16:00.725 { 00:16:00.725 "name": "BaseBdev1", 00:16:00.725 "aliases": [ 00:16:00.725 "e1a66b1a-a887-4a46-9133-7f97cd30221f" 00:16:00.725 ], 00:16:00.725 "product_name": "Malloc disk", 00:16:00.725 "block_size": 512, 00:16:00.725 "num_blocks": 65536, 00:16:00.725 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:00.725 "assigned_rate_limits": { 00:16:00.725 "rw_ios_per_sec": 0, 00:16:00.725 "rw_mbytes_per_sec": 0, 00:16:00.725 "r_mbytes_per_sec": 0, 00:16:00.725 "w_mbytes_per_sec": 0 00:16:00.725 }, 00:16:00.725 "claimed": true, 00:16:00.725 "claim_type": "exclusive_write", 00:16:00.725 "zoned": false, 00:16:00.725 "supported_io_types": { 00:16:00.725 "read": true, 00:16:00.725 "write": true, 00:16:00.725 "unmap": true, 00:16:00.725 "flush": true, 00:16:00.725 "reset": true, 00:16:00.725 "nvme_admin": false, 00:16:00.725 "nvme_io": false, 00:16:00.725 "nvme_io_md": false, 00:16:00.725 "write_zeroes": true, 00:16:00.725 "zcopy": true, 00:16:00.725 "get_zone_info": false, 00:16:00.725 "zone_management": false, 00:16:00.725 "zone_append": false, 00:16:00.725 "compare": false, 00:16:00.725 "compare_and_write": false, 00:16:00.725 "abort": true, 00:16:00.725 "seek_hole": false, 00:16:00.725 "seek_data": false, 00:16:00.725 "copy": true, 00:16:00.725 "nvme_iov_md": false 00:16:00.725 }, 00:16:00.725 "memory_domains": [ 00:16:00.725 { 00:16:00.725 "dma_device_id": "system", 00:16:00.725 "dma_device_type": 1 00:16:00.725 }, 00:16:00.725 { 00:16:00.725 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:00.725 "dma_device_type": 2 00:16:00.725 } 00:16:00.725 ], 00:16:00.725 "driver_specific": {} 00:16:00.725 } 00:16:00.725 ] 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.725 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.984 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.984 "name": "Existed_Raid", 00:16:00.984 "uuid": "95f34c78-e365-4fa7-a307-91915f860356", 
00:16:00.984 "strip_size_kb": 0, 00:16:00.984 "state": "configuring", 00:16:00.984 "raid_level": "raid1", 00:16:00.984 "superblock": true, 00:16:00.984 "num_base_bdevs": 3, 00:16:00.984 "num_base_bdevs_discovered": 1, 00:16:00.984 "num_base_bdevs_operational": 3, 00:16:00.984 "base_bdevs_list": [ 00:16:00.984 { 00:16:00.984 "name": "BaseBdev1", 00:16:00.984 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:00.984 "is_configured": true, 00:16:00.984 "data_offset": 2048, 00:16:00.984 "data_size": 63488 00:16:00.984 }, 00:16:00.984 { 00:16:00.984 "name": "BaseBdev2", 00:16:00.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.984 "is_configured": false, 00:16:00.984 "data_offset": 0, 00:16:00.984 "data_size": 0 00:16:00.984 }, 00:16:00.984 { 00:16:00.984 "name": "BaseBdev3", 00:16:00.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.984 "is_configured": false, 00:16:00.984 "data_offset": 0, 00:16:00.984 "data_size": 0 00:16:00.984 } 00:16:00.984 ] 00:16:00.984 }' 00:16:00.984 18:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.984 18:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.549 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:01.806 [2024-07-15 18:30:47.266480] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.806 [2024-07-15 18:30:47.266518] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e1470 name Existed_Raid, state configuring 00:16:01.806 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:02.064 [2024-07-15 18:30:47.527230] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.064 [2024-07-15 18:30:47.528791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.064 [2024-07-15 18:30:47.528821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.064 [2024-07-15 18:30:47.528829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.064 [2024-07-15 18:30:47.528837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.064 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.322 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.322 "name": "Existed_Raid", 00:16:02.322 "uuid": "3c03f77c-02ab-443b-99cb-ef24af8616c7", 00:16:02.322 "strip_size_kb": 0, 00:16:02.322 "state": "configuring", 00:16:02.322 "raid_level": "raid1", 00:16:02.322 "superblock": true, 00:16:02.322 "num_base_bdevs": 3, 00:16:02.322 "num_base_bdevs_discovered": 1, 00:16:02.322 "num_base_bdevs_operational": 3, 00:16:02.322 "base_bdevs_list": [ 00:16:02.322 { 00:16:02.322 "name": "BaseBdev1", 00:16:02.322 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:02.322 "is_configured": true, 00:16:02.322 "data_offset": 2048, 00:16:02.322 "data_size": 63488 00:16:02.322 }, 00:16:02.322 { 00:16:02.322 "name": "BaseBdev2", 00:16:02.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.322 "is_configured": false, 00:16:02.322 "data_offset": 0, 00:16:02.322 "data_size": 0 00:16:02.322 }, 00:16:02.322 { 00:16:02.322 "name": "BaseBdev3", 00:16:02.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.322 "is_configured": false, 00:16:02.322 "data_offset": 0, 00:16:02.322 "data_size": 0 00:16:02.322 } 00:16:02.322 ] 00:16:02.322 }' 00:16:02.322 18:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.322 18:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.888 18:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.146 
[2024-07-15 18:30:48.677437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.146 BaseBdev2 00:16:03.146 18:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:03.146 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:03.146 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.146 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:03.146 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.146 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.403 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.403 18:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.661 [ 00:16:03.661 { 00:16:03.661 "name": "BaseBdev2", 00:16:03.661 "aliases": [ 00:16:03.661 "a7f6f1ce-e424-49d2-a758-b562406256e3" 00:16:03.661 ], 00:16:03.661 "product_name": "Malloc disk", 00:16:03.661 "block_size": 512, 00:16:03.661 "num_blocks": 65536, 00:16:03.661 "uuid": "a7f6f1ce-e424-49d2-a758-b562406256e3", 00:16:03.661 "assigned_rate_limits": { 00:16:03.661 "rw_ios_per_sec": 0, 00:16:03.661 "rw_mbytes_per_sec": 0, 00:16:03.661 "r_mbytes_per_sec": 0, 00:16:03.661 "w_mbytes_per_sec": 0 00:16:03.661 }, 00:16:03.661 "claimed": true, 00:16:03.661 "claim_type": "exclusive_write", 00:16:03.661 "zoned": false, 00:16:03.661 "supported_io_types": { 00:16:03.661 "read": true, 00:16:03.661 "write": true, 00:16:03.661 "unmap": 
true, 00:16:03.661 "flush": true, 00:16:03.661 "reset": true, 00:16:03.661 "nvme_admin": false, 00:16:03.661 "nvme_io": false, 00:16:03.661 "nvme_io_md": false, 00:16:03.661 "write_zeroes": true, 00:16:03.661 "zcopy": true, 00:16:03.661 "get_zone_info": false, 00:16:03.661 "zone_management": false, 00:16:03.661 "zone_append": false, 00:16:03.661 "compare": false, 00:16:03.661 "compare_and_write": false, 00:16:03.661 "abort": true, 00:16:03.661 "seek_hole": false, 00:16:03.661 "seek_data": false, 00:16:03.661 "copy": true, 00:16:03.661 "nvme_iov_md": false 00:16:03.661 }, 00:16:03.661 "memory_domains": [ 00:16:03.661 { 00:16:03.661 "dma_device_id": "system", 00:16:03.661 "dma_device_type": 1 00:16:03.661 }, 00:16:03.661 { 00:16:03.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.661 "dma_device_type": 2 00:16:03.661 } 00:16:03.661 ], 00:16:03.661 "driver_specific": {} 00:16:03.661 } 00:16:03.661 ] 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:03.661 
18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.661 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.919 18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.919 "name": "Existed_Raid", 00:16:03.919 "uuid": "3c03f77c-02ab-443b-99cb-ef24af8616c7", 00:16:03.919 "strip_size_kb": 0, 00:16:03.919 "state": "configuring", 00:16:03.919 "raid_level": "raid1", 00:16:03.919 "superblock": true, 00:16:03.919 "num_base_bdevs": 3, 00:16:03.919 "num_base_bdevs_discovered": 2, 00:16:03.919 "num_base_bdevs_operational": 3, 00:16:03.919 "base_bdevs_list": [ 00:16:03.919 { 00:16:03.919 "name": "BaseBdev1", 00:16:03.919 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:03.919 "is_configured": true, 00:16:03.919 "data_offset": 2048, 00:16:03.919 "data_size": 63488 00:16:03.919 }, 00:16:03.919 { 00:16:03.919 "name": "BaseBdev2", 00:16:03.919 "uuid": "a7f6f1ce-e424-49d2-a758-b562406256e3", 00:16:03.919 "is_configured": true, 00:16:03.919 "data_offset": 2048, 00:16:03.919 "data_size": 63488 00:16:03.919 }, 00:16:03.919 { 00:16:03.919 "name": "BaseBdev3", 00:16:03.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.919 "is_configured": false, 00:16:03.919 "data_offset": 0, 00:16:03.919 "data_size": 0 00:16:03.919 } 00:16:03.919 ] 00:16:03.919 }' 00:16:03.919 
18:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.919 18:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.851 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.851 [2024-07-15 18:30:50.333083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.851 [2024-07-15 18:30:50.333240] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11e2360 00:16:04.851 [2024-07-15 18:30:50.333253] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:04.851 [2024-07-15 18:30:50.333435] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1386660 00:16:04.851 [2024-07-15 18:30:50.333560] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11e2360 00:16:04.851 [2024-07-15 18:30:50.333574] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11e2360 00:16:04.851 [2024-07-15 18:30:50.333674] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.851 BaseBdev3 00:16:04.851 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:04.851 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:04.851 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:04.851 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:04.851 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:04.852 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:16:04.852 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.109 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.367 [ 00:16:05.367 { 00:16:05.367 "name": "BaseBdev3", 00:16:05.367 "aliases": [ 00:16:05.367 "a82ea391-49af-4c72-a3b9-412ed7b73dc6" 00:16:05.367 ], 00:16:05.367 "product_name": "Malloc disk", 00:16:05.367 "block_size": 512, 00:16:05.367 "num_blocks": 65536, 00:16:05.367 "uuid": "a82ea391-49af-4c72-a3b9-412ed7b73dc6", 00:16:05.367 "assigned_rate_limits": { 00:16:05.367 "rw_ios_per_sec": 0, 00:16:05.367 "rw_mbytes_per_sec": 0, 00:16:05.367 "r_mbytes_per_sec": 0, 00:16:05.367 "w_mbytes_per_sec": 0 00:16:05.367 }, 00:16:05.367 "claimed": true, 00:16:05.367 "claim_type": "exclusive_write", 00:16:05.367 "zoned": false, 00:16:05.367 "supported_io_types": { 00:16:05.367 "read": true, 00:16:05.367 "write": true, 00:16:05.367 "unmap": true, 00:16:05.367 "flush": true, 00:16:05.367 "reset": true, 00:16:05.367 "nvme_admin": false, 00:16:05.367 "nvme_io": false, 00:16:05.367 "nvme_io_md": false, 00:16:05.367 "write_zeroes": true, 00:16:05.367 "zcopy": true, 00:16:05.367 "get_zone_info": false, 00:16:05.367 "zone_management": false, 00:16:05.367 "zone_append": false, 00:16:05.367 "compare": false, 00:16:05.367 "compare_and_write": false, 00:16:05.367 "abort": true, 00:16:05.367 "seek_hole": false, 00:16:05.367 "seek_data": false, 00:16:05.367 "copy": true, 00:16:05.367 "nvme_iov_md": false 00:16:05.367 }, 00:16:05.367 "memory_domains": [ 00:16:05.367 { 00:16:05.367 "dma_device_id": "system", 00:16:05.367 "dma_device_type": 1 00:16:05.367 }, 00:16:05.367 { 00:16:05.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.367 
"dma_device_type": 2 00:16:05.367 } 00:16:05.367 ], 00:16:05.367 "driver_specific": {} 00:16:05.367 } 00:16:05.367 ] 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.367 18:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.625 18:30:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.625 "name": "Existed_Raid", 00:16:05.625 "uuid": "3c03f77c-02ab-443b-99cb-ef24af8616c7", 00:16:05.625 "strip_size_kb": 0, 00:16:05.625 "state": "online", 00:16:05.625 "raid_level": "raid1", 00:16:05.625 "superblock": true, 00:16:05.625 "num_base_bdevs": 3, 00:16:05.625 "num_base_bdevs_discovered": 3, 00:16:05.625 "num_base_bdevs_operational": 3, 00:16:05.625 "base_bdevs_list": [ 00:16:05.625 { 00:16:05.625 "name": "BaseBdev1", 00:16:05.625 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:05.625 "is_configured": true, 00:16:05.625 "data_offset": 2048, 00:16:05.625 "data_size": 63488 00:16:05.625 }, 00:16:05.625 { 00:16:05.625 "name": "BaseBdev2", 00:16:05.625 "uuid": "a7f6f1ce-e424-49d2-a758-b562406256e3", 00:16:05.625 "is_configured": true, 00:16:05.625 "data_offset": 2048, 00:16:05.625 "data_size": 63488 00:16:05.625 }, 00:16:05.625 { 00:16:05.625 "name": "BaseBdev3", 00:16:05.625 "uuid": "a82ea391-49af-4c72-a3b9-412ed7b73dc6", 00:16:05.625 "is_configured": true, 00:16:05.625 "data_offset": 2048, 00:16:05.625 "data_size": 63488 00:16:05.625 } 00:16:05.625 ] 00:16:05.625 }' 00:16:05.625 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.625 18:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:06.562 18:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:06.562 [2024-07-15 18:30:51.997907] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.562 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:06.562 "name": "Existed_Raid", 00:16:06.562 "aliases": [ 00:16:06.562 "3c03f77c-02ab-443b-99cb-ef24af8616c7" 00:16:06.562 ], 00:16:06.562 "product_name": "Raid Volume", 00:16:06.562 "block_size": 512, 00:16:06.562 "num_blocks": 63488, 00:16:06.562 "uuid": "3c03f77c-02ab-443b-99cb-ef24af8616c7", 00:16:06.562 "assigned_rate_limits": { 00:16:06.562 "rw_ios_per_sec": 0, 00:16:06.562 "rw_mbytes_per_sec": 0, 00:16:06.562 "r_mbytes_per_sec": 0, 00:16:06.562 "w_mbytes_per_sec": 0 00:16:06.562 }, 00:16:06.562 "claimed": false, 00:16:06.562 "zoned": false, 00:16:06.562 "supported_io_types": { 00:16:06.562 "read": true, 00:16:06.562 "write": true, 00:16:06.562 "unmap": false, 00:16:06.562 "flush": false, 00:16:06.562 "reset": true, 00:16:06.562 "nvme_admin": false, 00:16:06.562 "nvme_io": false, 00:16:06.562 "nvme_io_md": false, 00:16:06.562 "write_zeroes": true, 00:16:06.562 "zcopy": false, 00:16:06.562 "get_zone_info": false, 00:16:06.562 "zone_management": false, 00:16:06.562 "zone_append": false, 00:16:06.562 "compare": false, 00:16:06.562 "compare_and_write": false, 00:16:06.562 "abort": false, 00:16:06.562 "seek_hole": false, 00:16:06.562 "seek_data": false, 00:16:06.562 "copy": false, 00:16:06.562 "nvme_iov_md": false 00:16:06.562 }, 00:16:06.562 "memory_domains": [ 00:16:06.562 { 00:16:06.562 "dma_device_id": "system", 00:16:06.562 
"dma_device_type": 1 00:16:06.562 }, 00:16:06.562 { 00:16:06.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.563 "dma_device_type": 2 00:16:06.563 }, 00:16:06.563 { 00:16:06.563 "dma_device_id": "system", 00:16:06.563 "dma_device_type": 1 00:16:06.563 }, 00:16:06.563 { 00:16:06.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.563 "dma_device_type": 2 00:16:06.563 }, 00:16:06.563 { 00:16:06.563 "dma_device_id": "system", 00:16:06.563 "dma_device_type": 1 00:16:06.563 }, 00:16:06.563 { 00:16:06.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.563 "dma_device_type": 2 00:16:06.563 } 00:16:06.563 ], 00:16:06.563 "driver_specific": { 00:16:06.563 "raid": { 00:16:06.563 "uuid": "3c03f77c-02ab-443b-99cb-ef24af8616c7", 00:16:06.563 "strip_size_kb": 0, 00:16:06.563 "state": "online", 00:16:06.563 "raid_level": "raid1", 00:16:06.563 "superblock": true, 00:16:06.563 "num_base_bdevs": 3, 00:16:06.563 "num_base_bdevs_discovered": 3, 00:16:06.563 "num_base_bdevs_operational": 3, 00:16:06.563 "base_bdevs_list": [ 00:16:06.563 { 00:16:06.563 "name": "BaseBdev1", 00:16:06.563 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:06.563 "is_configured": true, 00:16:06.563 "data_offset": 2048, 00:16:06.563 "data_size": 63488 00:16:06.563 }, 00:16:06.563 { 00:16:06.563 "name": "BaseBdev2", 00:16:06.563 "uuid": "a7f6f1ce-e424-49d2-a758-b562406256e3", 00:16:06.563 "is_configured": true, 00:16:06.563 "data_offset": 2048, 00:16:06.563 "data_size": 63488 00:16:06.563 }, 00:16:06.563 { 00:16:06.563 "name": "BaseBdev3", 00:16:06.563 "uuid": "a82ea391-49af-4c72-a3b9-412ed7b73dc6", 00:16:06.563 "is_configured": true, 00:16:06.563 "data_offset": 2048, 00:16:06.563 "data_size": 63488 00:16:06.563 } 00:16:06.563 ] 00:16:06.563 } 00:16:06.563 } 00:16:06.563 }' 00:16:06.563 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.563 18:30:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:06.563 BaseBdev2 00:16:06.563 BaseBdev3' 00:16:06.563 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.563 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:06.563 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.822 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.822 "name": "BaseBdev1", 00:16:06.822 "aliases": [ 00:16:06.822 "e1a66b1a-a887-4a46-9133-7f97cd30221f" 00:16:06.822 ], 00:16:06.822 "product_name": "Malloc disk", 00:16:06.822 "block_size": 512, 00:16:06.822 "num_blocks": 65536, 00:16:06.822 "uuid": "e1a66b1a-a887-4a46-9133-7f97cd30221f", 00:16:06.822 "assigned_rate_limits": { 00:16:06.822 "rw_ios_per_sec": 0, 00:16:06.822 "rw_mbytes_per_sec": 0, 00:16:06.822 "r_mbytes_per_sec": 0, 00:16:06.822 "w_mbytes_per_sec": 0 00:16:06.822 }, 00:16:06.822 "claimed": true, 00:16:06.822 "claim_type": "exclusive_write", 00:16:06.822 "zoned": false, 00:16:06.822 "supported_io_types": { 00:16:06.822 "read": true, 00:16:06.822 "write": true, 00:16:06.822 "unmap": true, 00:16:06.822 "flush": true, 00:16:06.822 "reset": true, 00:16:06.822 "nvme_admin": false, 00:16:06.822 "nvme_io": false, 00:16:06.822 "nvme_io_md": false, 00:16:06.822 "write_zeroes": true, 00:16:06.822 "zcopy": true, 00:16:06.822 "get_zone_info": false, 00:16:06.822 "zone_management": false, 00:16:06.822 "zone_append": false, 00:16:06.822 "compare": false, 00:16:06.822 "compare_and_write": false, 00:16:06.822 "abort": true, 00:16:06.822 "seek_hole": false, 00:16:06.822 "seek_data": false, 00:16:06.822 "copy": true, 00:16:06.822 "nvme_iov_md": false 00:16:06.822 }, 00:16:06.822 "memory_domains": 
[ 00:16:06.822 { 00:16:06.822 "dma_device_id": "system", 00:16:06.822 "dma_device_type": 1 00:16:06.822 }, 00:16:06.822 { 00:16:06.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.822 "dma_device_type": 2 00:16:06.822 } 00:16:06.822 ], 00:16:06.822 "driver_specific": {} 00:16:06.822 }' 00:16:06.822 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.081 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.081 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.081 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.081 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.081 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.081 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:07.340 18:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
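The `bdev_raid.sh@205`-`@208` checks above (`[[ 512 == 512 ]]`, `[[ null == null ]]`) compare each configured base bdev's `block_size`, `md_size`, `md_interleave`, and `dif_type` against the raid volume's. A self-contained sketch of that comparison loop, with abbreviated stand-in descriptors instead of live `bdev_get_bdevs` output (python3 stands in for jq; bash is assumed for `[[ ]]`):

```shell
# Abbreviated stand-ins for the raid volume and base bdev descriptors in the log.
raid_info='{"block_size": 512, "md_size": null}'
base_info='{"block_size": 512, "md_size": null}'

# Print a field's value, or "null" if absent/None (matches jq's null output).
get_field() {
    printf '%s' "$1" | python3 -c '
import json, sys
val = json.load(sys.stdin).get(sys.argv[1])
print("null" if val is None else val)' "$2"
}

# Each property of the base bdev must match the raid volume, like @205-@208.
ok=true
for field in block_size md_size; do
    [[ "$(get_field "$raid_info" "$field")" == "$(get_field "$base_info" "$field")" ]] || ok=false
done
echo "$ok"
```

The double `jq` invocations per field in the log correspond to querying the same field once from the raid volume's info and once from the base bdev's info before the `[[ ... == ... ]]` test.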
00:16:07.976 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.976 "name": "BaseBdev2", 00:16:07.976 "aliases": [ 00:16:07.976 "a7f6f1ce-e424-49d2-a758-b562406256e3" 00:16:07.976 ], 00:16:07.976 "product_name": "Malloc disk", 00:16:07.976 "block_size": 512, 00:16:07.976 "num_blocks": 65536, 00:16:07.976 "uuid": "a7f6f1ce-e424-49d2-a758-b562406256e3", 00:16:07.976 "assigned_rate_limits": { 00:16:07.976 "rw_ios_per_sec": 0, 00:16:07.976 "rw_mbytes_per_sec": 0, 00:16:07.976 "r_mbytes_per_sec": 0, 00:16:07.976 "w_mbytes_per_sec": 0 00:16:07.976 }, 00:16:07.976 "claimed": true, 00:16:07.976 "claim_type": "exclusive_write", 00:16:07.976 "zoned": false, 00:16:07.976 "supported_io_types": { 00:16:07.976 "read": true, 00:16:07.976 "write": true, 00:16:07.976 "unmap": true, 00:16:07.976 "flush": true, 00:16:07.976 "reset": true, 00:16:07.976 "nvme_admin": false, 00:16:07.976 "nvme_io": false, 00:16:07.976 "nvme_io_md": false, 00:16:07.976 "write_zeroes": true, 00:16:07.976 "zcopy": true, 00:16:07.976 "get_zone_info": false, 00:16:07.976 "zone_management": false, 00:16:07.976 "zone_append": false, 00:16:07.976 "compare": false, 00:16:07.976 "compare_and_write": false, 00:16:07.976 "abort": true, 00:16:07.976 "seek_hole": false, 00:16:07.976 "seek_data": false, 00:16:07.976 "copy": true, 00:16:07.976 "nvme_iov_md": false 00:16:07.976 }, 00:16:07.976 "memory_domains": [ 00:16:07.976 { 00:16:07.976 "dma_device_id": "system", 00:16:07.976 "dma_device_type": 1 00:16:07.976 }, 00:16:07.976 { 00:16:07.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.976 "dma_device_type": 2 00:16:07.976 } 00:16:07.976 ], 00:16:07.976 "driver_specific": {} 00:16:07.976 }' 00:16:07.976 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.976 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.976 18:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.976 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.235 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.235 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:08.235 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.235 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.235 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.235 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.494 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.494 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.494 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:08.494 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:08.494 18:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:09.061 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:09.061 "name": "BaseBdev3", 00:16:09.061 "aliases": [ 00:16:09.061 "a82ea391-49af-4c72-a3b9-412ed7b73dc6" 00:16:09.061 ], 00:16:09.061 "product_name": "Malloc disk", 00:16:09.061 "block_size": 512, 00:16:09.061 "num_blocks": 65536, 00:16:09.061 "uuid": "a82ea391-49af-4c72-a3b9-412ed7b73dc6", 00:16:09.061 "assigned_rate_limits": { 00:16:09.061 "rw_ios_per_sec": 0, 00:16:09.061 "rw_mbytes_per_sec": 0, 00:16:09.061 "r_mbytes_per_sec": 0, 00:16:09.061 
"w_mbytes_per_sec": 0 00:16:09.061 }, 00:16:09.061 "claimed": true, 00:16:09.061 "claim_type": "exclusive_write", 00:16:09.061 "zoned": false, 00:16:09.061 "supported_io_types": { 00:16:09.061 "read": true, 00:16:09.061 "write": true, 00:16:09.061 "unmap": true, 00:16:09.061 "flush": true, 00:16:09.061 "reset": true, 00:16:09.061 "nvme_admin": false, 00:16:09.061 "nvme_io": false, 00:16:09.061 "nvme_io_md": false, 00:16:09.061 "write_zeroes": true, 00:16:09.061 "zcopy": true, 00:16:09.061 "get_zone_info": false, 00:16:09.061 "zone_management": false, 00:16:09.061 "zone_append": false, 00:16:09.061 "compare": false, 00:16:09.061 "compare_and_write": false, 00:16:09.061 "abort": true, 00:16:09.061 "seek_hole": false, 00:16:09.061 "seek_data": false, 00:16:09.061 "copy": true, 00:16:09.061 "nvme_iov_md": false 00:16:09.061 }, 00:16:09.061 "memory_domains": [ 00:16:09.061 { 00:16:09.061 "dma_device_id": "system", 00:16:09.061 "dma_device_type": 1 00:16:09.061 }, 00:16:09.061 { 00:16:09.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.061 "dma_device_type": 2 00:16:09.061 } 00:16:09.062 ], 00:16:09.062 "driver_specific": {} 00:16:09.062 }' 00:16:09.062 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.062 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.062 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:09.062 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.321 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.581 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:09.581 18:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:10.149 [2024-07-15 18:30:55.406819] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.149 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.408 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.408 "name": "Existed_Raid", 00:16:10.408 "uuid": "3c03f77c-02ab-443b-99cb-ef24af8616c7", 00:16:10.408 "strip_size_kb": 0, 00:16:10.408 "state": "online", 00:16:10.408 "raid_level": "raid1", 00:16:10.408 "superblock": true, 00:16:10.408 "num_base_bdevs": 3, 00:16:10.408 "num_base_bdevs_discovered": 2, 00:16:10.408 "num_base_bdevs_operational": 2, 00:16:10.408 "base_bdevs_list": [ 00:16:10.408 { 00:16:10.408 "name": null, 00:16:10.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.408 "is_configured": false, 00:16:10.408 "data_offset": 2048, 00:16:10.408 "data_size": 63488 00:16:10.408 }, 00:16:10.408 { 00:16:10.408 "name": "BaseBdev2", 00:16:10.408 "uuid": "a7f6f1ce-e424-49d2-a758-b562406256e3", 00:16:10.408 "is_configured": true, 00:16:10.408 "data_offset": 2048, 00:16:10.408 "data_size": 63488 00:16:10.408 }, 00:16:10.408 { 00:16:10.408 "name": "BaseBdev3", 00:16:10.408 "uuid": "a82ea391-49af-4c72-a3b9-412ed7b73dc6", 00:16:10.408 "is_configured": true, 00:16:10.408 "data_offset": 2048, 00:16:10.408 "data_size": 63488 00:16:10.408 } 
00:16:10.408 ] 00:16:10.408 }' 00:16:10.408 18:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.408 18:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.344 18:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:11.602 [2024-07-15 18:30:57.056443] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:11.602 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:11.602 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:11.602 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.602 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:11.860 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:11.860 18:30:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.860 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:12.119 [2024-07-15 18:30:57.636744] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:12.119 [2024-07-15 18:30:57.636830] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.119 [2024-07-15 18:30:57.647776] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.119 [2024-07-15 18:30:57.647810] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.119 [2024-07-15 18:30:57.647819] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e2360 name Existed_Raid, state offline 00:16:12.119 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:12.119 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.119 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.119 18:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:12.687 18:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:12.687 18:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:12.687 18:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:12.687 18:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:12.687 18:30:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:12.687 18:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:12.946 BaseBdev2 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.946 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.513 18:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.771 [ 00:16:13.771 { 00:16:13.771 "name": "BaseBdev2", 00:16:13.771 "aliases": [ 00:16:13.771 "70fc9f48-d4bb-4366-802b-c96031e3cea1" 00:16:13.771 ], 00:16:13.771 "product_name": "Malloc disk", 00:16:13.771 "block_size": 512, 00:16:13.771 "num_blocks": 65536, 00:16:13.771 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:13.771 "assigned_rate_limits": { 00:16:13.771 "rw_ios_per_sec": 0, 00:16:13.771 "rw_mbytes_per_sec": 0, 00:16:13.771 "r_mbytes_per_sec": 0, 00:16:13.771 "w_mbytes_per_sec": 0 00:16:13.771 }, 00:16:13.771 "claimed": false, 00:16:13.771 "zoned": false, 
00:16:13.771 "supported_io_types": { 00:16:13.771 "read": true, 00:16:13.771 "write": true, 00:16:13.771 "unmap": true, 00:16:13.771 "flush": true, 00:16:13.771 "reset": true, 00:16:13.771 "nvme_admin": false, 00:16:13.771 "nvme_io": false, 00:16:13.771 "nvme_io_md": false, 00:16:13.771 "write_zeroes": true, 00:16:13.771 "zcopy": true, 00:16:13.771 "get_zone_info": false, 00:16:13.771 "zone_management": false, 00:16:13.771 "zone_append": false, 00:16:13.771 "compare": false, 00:16:13.771 "compare_and_write": false, 00:16:13.771 "abort": true, 00:16:13.771 "seek_hole": false, 00:16:13.771 "seek_data": false, 00:16:13.771 "copy": true, 00:16:13.771 "nvme_iov_md": false 00:16:13.771 }, 00:16:13.771 "memory_domains": [ 00:16:13.771 { 00:16:13.771 "dma_device_id": "system", 00:16:13.771 "dma_device_type": 1 00:16:13.771 }, 00:16:13.771 { 00:16:13.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.771 "dma_device_type": 2 00:16:13.771 } 00:16:13.771 ], 00:16:13.771 "driver_specific": {} 00:16:13.771 } 00:16:13.771 ] 00:16:13.771 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:13.771 18:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:13.771 18:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:13.771 18:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:14.339 BaseBdev3 00:16:14.339 18:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:14.339 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:14.339 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:14.339 18:30:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:14.339 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:14.339 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:14.339 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.598 18:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:14.857 [ 00:16:14.857 { 00:16:14.857 "name": "BaseBdev3", 00:16:14.857 "aliases": [ 00:16:14.857 "f9c1721c-a456-4c1f-8937-45b4e5d52585" 00:16:14.857 ], 00:16:14.857 "product_name": "Malloc disk", 00:16:14.857 "block_size": 512, 00:16:14.857 "num_blocks": 65536, 00:16:14.857 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:14.857 "assigned_rate_limits": { 00:16:14.857 "rw_ios_per_sec": 0, 00:16:14.857 "rw_mbytes_per_sec": 0, 00:16:14.857 "r_mbytes_per_sec": 0, 00:16:14.857 "w_mbytes_per_sec": 0 00:16:14.857 }, 00:16:14.857 "claimed": false, 00:16:14.857 "zoned": false, 00:16:14.857 "supported_io_types": { 00:16:14.857 "read": true, 00:16:14.857 "write": true, 00:16:14.857 "unmap": true, 00:16:14.857 "flush": true, 00:16:14.857 "reset": true, 00:16:14.857 "nvme_admin": false, 00:16:14.857 "nvme_io": false, 00:16:14.857 "nvme_io_md": false, 00:16:14.857 "write_zeroes": true, 00:16:14.857 "zcopy": true, 00:16:14.857 "get_zone_info": false, 00:16:14.857 "zone_management": false, 00:16:14.857 "zone_append": false, 00:16:14.857 "compare": false, 00:16:14.857 "compare_and_write": false, 00:16:14.857 "abort": true, 00:16:14.857 "seek_hole": false, 00:16:14.857 "seek_data": false, 00:16:14.857 "copy": true, 00:16:14.857 "nvme_iov_md": 
false 00:16:14.857 }, 00:16:14.857 "memory_domains": [ 00:16:14.857 { 00:16:14.857 "dma_device_id": "system", 00:16:14.857 "dma_device_type": 1 00:16:14.857 }, 00:16:14.857 { 00:16:14.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.857 "dma_device_type": 2 00:16:14.857 } 00:16:14.857 ], 00:16:14.857 "driver_specific": {} 00:16:14.857 } 00:16:14.857 ] 00:16:15.116 18:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:15.116 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:15.116 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:15.116 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:15.116 [2024-07-15 18:31:00.664162] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.116 [2024-07-15 18:31:00.664200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.116 [2024-07-15 18:31:00.664216] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.116 [2024-07-15 18:31:00.665600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:15.375 18:31:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.375 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.634 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.634 "name": "Existed_Raid", 00:16:15.634 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:15.634 "strip_size_kb": 0, 00:16:15.634 "state": "configuring", 00:16:15.634 "raid_level": "raid1", 00:16:15.634 "superblock": true, 00:16:15.634 "num_base_bdevs": 3, 00:16:15.634 "num_base_bdevs_discovered": 2, 00:16:15.634 "num_base_bdevs_operational": 3, 00:16:15.634 "base_bdevs_list": [ 00:16:15.634 { 00:16:15.634 "name": "BaseBdev1", 00:16:15.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.634 "is_configured": false, 00:16:15.634 "data_offset": 0, 00:16:15.634 "data_size": 0 00:16:15.634 }, 00:16:15.634 { 00:16:15.634 "name": "BaseBdev2", 00:16:15.634 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:15.634 "is_configured": true, 00:16:15.634 "data_offset": 2048, 00:16:15.634 "data_size": 63488 00:16:15.634 }, 00:16:15.634 { 00:16:15.634 "name": "BaseBdev3", 
00:16:15.634 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:15.634 "is_configured": true, 00:16:15.634 "data_offset": 2048, 00:16:15.634 "data_size": 63488 00:16:15.634 } 00:16:15.634 ] 00:16:15.634 }' 00:16:15.634 18:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.634 18:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.570 18:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:16.828 [2024-07-15 18:31:02.156189] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.828 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.394 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.394 "name": "Existed_Raid", 00:16:17.394 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:17.394 "strip_size_kb": 0, 00:16:17.394 "state": "configuring", 00:16:17.394 "raid_level": "raid1", 00:16:17.394 "superblock": true, 00:16:17.394 "num_base_bdevs": 3, 00:16:17.395 "num_base_bdevs_discovered": 1, 00:16:17.395 "num_base_bdevs_operational": 3, 00:16:17.395 "base_bdevs_list": [ 00:16:17.395 { 00:16:17.395 "name": "BaseBdev1", 00:16:17.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.395 "is_configured": false, 00:16:17.395 "data_offset": 0, 00:16:17.395 "data_size": 0 00:16:17.395 }, 00:16:17.395 { 00:16:17.395 "name": null, 00:16:17.395 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:17.395 "is_configured": false, 00:16:17.395 "data_offset": 2048, 00:16:17.395 "data_size": 63488 00:16:17.395 }, 00:16:17.395 { 00:16:17.395 "name": "BaseBdev3", 00:16:17.395 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:17.395 "is_configured": true, 00:16:17.395 "data_offset": 2048, 00:16:17.395 "data_size": 63488 00:16:17.395 } 00:16:17.395 ] 00:16:17.395 }' 00:16:17.395 18:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.395 18:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.330 18:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.330 18:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:16:18.330 18:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:18.330 18:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.898 [2024-07-15 18:31:04.265141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.898 BaseBdev1 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:18.898 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.156 18:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.723 [ 00:16:19.723 { 00:16:19.723 "name": "BaseBdev1", 00:16:19.723 "aliases": [ 00:16:19.723 "268d8854-af52-4585-a1e0-2809516ccb3b" 00:16:19.723 ], 00:16:19.723 "product_name": "Malloc disk", 00:16:19.723 "block_size": 512, 00:16:19.723 "num_blocks": 65536, 00:16:19.723 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:19.723 
"assigned_rate_limits": { 00:16:19.723 "rw_ios_per_sec": 0, 00:16:19.723 "rw_mbytes_per_sec": 0, 00:16:19.723 "r_mbytes_per_sec": 0, 00:16:19.723 "w_mbytes_per_sec": 0 00:16:19.723 }, 00:16:19.723 "claimed": true, 00:16:19.723 "claim_type": "exclusive_write", 00:16:19.724 "zoned": false, 00:16:19.724 "supported_io_types": { 00:16:19.724 "read": true, 00:16:19.724 "write": true, 00:16:19.724 "unmap": true, 00:16:19.724 "flush": true, 00:16:19.724 "reset": true, 00:16:19.724 "nvme_admin": false, 00:16:19.724 "nvme_io": false, 00:16:19.724 "nvme_io_md": false, 00:16:19.724 "write_zeroes": true, 00:16:19.724 "zcopy": true, 00:16:19.724 "get_zone_info": false, 00:16:19.724 "zone_management": false, 00:16:19.724 "zone_append": false, 00:16:19.724 "compare": false, 00:16:19.724 "compare_and_write": false, 00:16:19.724 "abort": true, 00:16:19.724 "seek_hole": false, 00:16:19.724 "seek_data": false, 00:16:19.724 "copy": true, 00:16:19.724 "nvme_iov_md": false 00:16:19.724 }, 00:16:19.724 "memory_domains": [ 00:16:19.724 { 00:16:19.724 "dma_device_id": "system", 00:16:19.724 "dma_device_type": 1 00:16:19.724 }, 00:16:19.724 { 00:16:19.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.724 "dma_device_type": 2 00:16:19.724 } 00:16:19.724 ], 00:16:19.724 "driver_specific": {} 00:16:19.724 } 00:16:19.724 ] 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.724 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.982 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.982 "name": "Existed_Raid", 00:16:19.982 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:19.982 "strip_size_kb": 0, 00:16:19.982 "state": "configuring", 00:16:19.982 "raid_level": "raid1", 00:16:19.982 "superblock": true, 00:16:19.982 "num_base_bdevs": 3, 00:16:19.982 "num_base_bdevs_discovered": 2, 00:16:19.982 "num_base_bdevs_operational": 3, 00:16:19.982 "base_bdevs_list": [ 00:16:19.982 { 00:16:19.982 "name": "BaseBdev1", 00:16:19.982 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:19.982 "is_configured": true, 00:16:19.982 "data_offset": 2048, 00:16:19.982 "data_size": 63488 00:16:19.982 }, 00:16:19.982 { 00:16:19.982 "name": null, 00:16:19.982 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:19.982 "is_configured": false, 00:16:19.982 "data_offset": 2048, 00:16:19.982 "data_size": 63488 00:16:19.982 }, 00:16:19.982 { 00:16:19.982 "name": "BaseBdev3", 00:16:19.982 "uuid": 
"f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:19.982 "is_configured": true, 00:16:19.982 "data_offset": 2048, 00:16:19.982 "data_size": 63488 00:16:19.982 } 00:16:19.982 ] 00:16:19.982 }' 00:16:19.982 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.982 18:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.550 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.550 18:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.808 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:20.808 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:21.375 [2024-07-15 18:31:06.627564] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.375 "name": "Existed_Raid", 00:16:21.375 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:21.375 "strip_size_kb": 0, 00:16:21.375 "state": "configuring", 00:16:21.375 "raid_level": "raid1", 00:16:21.375 "superblock": true, 00:16:21.375 "num_base_bdevs": 3, 00:16:21.375 "num_base_bdevs_discovered": 1, 00:16:21.375 "num_base_bdevs_operational": 3, 00:16:21.375 "base_bdevs_list": [ 00:16:21.375 { 00:16:21.375 "name": "BaseBdev1", 00:16:21.375 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:21.375 "is_configured": true, 00:16:21.375 "data_offset": 2048, 00:16:21.375 "data_size": 63488 00:16:21.375 }, 00:16:21.375 { 00:16:21.375 "name": null, 00:16:21.375 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:21.375 "is_configured": false, 00:16:21.375 "data_offset": 2048, 00:16:21.375 "data_size": 63488 00:16:21.375 }, 00:16:21.375 { 00:16:21.375 "name": null, 00:16:21.375 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:21.375 "is_configured": false, 00:16:21.375 "data_offset": 2048, 00:16:21.375 "data_size": 63488 00:16:21.375 } 00:16:21.375 ] 00:16:21.375 }' 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:21.375 18:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.310 18:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.310 18:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:22.310 18:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:22.310 18:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:22.568 [2024-07-15 18:31:08.007310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.568 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.911 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.911 "name": "Existed_Raid", 00:16:22.911 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:22.911 "strip_size_kb": 0, 00:16:22.911 "state": "configuring", 00:16:22.911 "raid_level": "raid1", 00:16:22.911 "superblock": true, 00:16:22.911 "num_base_bdevs": 3, 00:16:22.911 "num_base_bdevs_discovered": 2, 00:16:22.911 "num_base_bdevs_operational": 3, 00:16:22.911 "base_bdevs_list": [ 00:16:22.911 { 00:16:22.911 "name": "BaseBdev1", 00:16:22.911 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:22.911 "is_configured": true, 00:16:22.911 "data_offset": 2048, 00:16:22.911 "data_size": 63488 00:16:22.911 }, 00:16:22.911 { 00:16:22.911 "name": null, 00:16:22.911 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:22.911 "is_configured": false, 00:16:22.911 "data_offset": 2048, 00:16:22.911 "data_size": 63488 00:16:22.911 }, 00:16:22.911 { 00:16:22.911 "name": "BaseBdev3", 00:16:22.911 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:22.911 "is_configured": true, 00:16:22.911 "data_offset": 2048, 00:16:22.911 "data_size": 63488 00:16:22.911 } 00:16:22.911 ] 00:16:22.911 }' 00:16:22.911 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.911 18:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.477 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.477 18:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.735 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:23.735 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:24.300 [2024-07-15 18:31:09.623713] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.300 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.557 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.557 "name": "Existed_Raid", 00:16:24.557 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:24.557 "strip_size_kb": 0, 00:16:24.557 "state": "configuring", 00:16:24.557 "raid_level": "raid1", 00:16:24.557 "superblock": true, 00:16:24.557 "num_base_bdevs": 3, 00:16:24.557 "num_base_bdevs_discovered": 1, 00:16:24.557 "num_base_bdevs_operational": 3, 00:16:24.557 "base_bdevs_list": [ 00:16:24.557 { 00:16:24.557 "name": null, 00:16:24.557 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:24.557 "is_configured": false, 00:16:24.557 "data_offset": 2048, 00:16:24.557 "data_size": 63488 00:16:24.557 }, 00:16:24.557 { 00:16:24.557 "name": null, 00:16:24.557 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:24.557 "is_configured": false, 00:16:24.557 "data_offset": 2048, 00:16:24.557 "data_size": 63488 00:16:24.557 }, 00:16:24.557 { 00:16:24.557 "name": "BaseBdev3", 00:16:24.557 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:24.557 "is_configured": true, 00:16:24.557 "data_offset": 2048, 00:16:24.557 "data_size": 63488 00:16:24.557 } 00:16:24.557 ] 00:16:24.557 }' 00:16:24.557 18:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.557 18:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.122 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.122 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:16:25.380 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:25.380 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:25.639 [2024-07-15 18:31:10.969854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.639 18:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:16:25.898 18:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.898 "name": "Existed_Raid", 00:16:25.898 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:25.898 "strip_size_kb": 0, 00:16:25.898 "state": "configuring", 00:16:25.898 "raid_level": "raid1", 00:16:25.898 "superblock": true, 00:16:25.898 "num_base_bdevs": 3, 00:16:25.898 "num_base_bdevs_discovered": 2, 00:16:25.898 "num_base_bdevs_operational": 3, 00:16:25.898 "base_bdevs_list": [ 00:16:25.898 { 00:16:25.898 "name": null, 00:16:25.898 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:25.898 "is_configured": false, 00:16:25.898 "data_offset": 2048, 00:16:25.898 "data_size": 63488 00:16:25.898 }, 00:16:25.898 { 00:16:25.898 "name": "BaseBdev2", 00:16:25.898 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:25.898 "is_configured": true, 00:16:25.898 "data_offset": 2048, 00:16:25.898 "data_size": 63488 00:16:25.898 }, 00:16:25.898 { 00:16:25.898 "name": "BaseBdev3", 00:16:25.898 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:25.898 "is_configured": true, 00:16:25.898 "data_offset": 2048, 00:16:25.898 "data_size": 63488 00:16:25.898 } 00:16:25.898 ] 00:16:25.898 }' 00:16:25.898 18:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.898 18:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.466 18:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.466 18:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.724 18:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:26.724 18:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.724 18:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:26.983 18:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 268d8854-af52-4585-a1e0-2809516ccb3b 00:16:27.241 [2024-07-15 18:31:12.649862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:27.241 [2024-07-15 18:31:12.650018] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11e0580 00:16:27.241 [2024-07-15 18:31:12.650030] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.241 [2024-07-15 18:31:12.650211] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1391db0 00:16:27.241 [2024-07-15 18:31:12.650343] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11e0580 00:16:27.241 [2024-07-15 18:31:12.650352] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11e0580 00:16:27.241 [2024-07-15 18:31:12.650448] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.241 NewBaseBdev 00:16:27.241 18:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:27.241 18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:27.241 18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:27.241 18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:27.241 18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:27.241 
18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:27.241 18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.500 18:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:27.758 [ 00:16:27.758 { 00:16:27.758 "name": "NewBaseBdev", 00:16:27.758 "aliases": [ 00:16:27.758 "268d8854-af52-4585-a1e0-2809516ccb3b" 00:16:27.758 ], 00:16:27.758 "product_name": "Malloc disk", 00:16:27.758 "block_size": 512, 00:16:27.758 "num_blocks": 65536, 00:16:27.758 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:27.758 "assigned_rate_limits": { 00:16:27.758 "rw_ios_per_sec": 0, 00:16:27.758 "rw_mbytes_per_sec": 0, 00:16:27.758 "r_mbytes_per_sec": 0, 00:16:27.758 "w_mbytes_per_sec": 0 00:16:27.758 }, 00:16:27.758 "claimed": true, 00:16:27.758 "claim_type": "exclusive_write", 00:16:27.758 "zoned": false, 00:16:27.758 "supported_io_types": { 00:16:27.758 "read": true, 00:16:27.758 "write": true, 00:16:27.758 "unmap": true, 00:16:27.758 "flush": true, 00:16:27.758 "reset": true, 00:16:27.758 "nvme_admin": false, 00:16:27.758 "nvme_io": false, 00:16:27.758 "nvme_io_md": false, 00:16:27.758 "write_zeroes": true, 00:16:27.758 "zcopy": true, 00:16:27.758 "get_zone_info": false, 00:16:27.758 "zone_management": false, 00:16:27.758 "zone_append": false, 00:16:27.758 "compare": false, 00:16:27.758 "compare_and_write": false, 00:16:27.758 "abort": true, 00:16:27.758 "seek_hole": false, 00:16:27.758 "seek_data": false, 00:16:27.758 "copy": true, 00:16:27.758 "nvme_iov_md": false 00:16:27.758 }, 00:16:27.758 "memory_domains": [ 00:16:27.758 { 00:16:27.758 "dma_device_id": "system", 00:16:27.758 "dma_device_type": 1 00:16:27.758 
}, 00:16:27.758 { 00:16:27.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.758 "dma_device_type": 2 00:16:27.758 } 00:16:27.758 ], 00:16:27.758 "driver_specific": {} 00:16:27.758 } 00:16:27.758 ] 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.758 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.016 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.016 "name": "Existed_Raid", 00:16:28.016 "uuid": 
"c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:28.016 "strip_size_kb": 0, 00:16:28.016 "state": "online", 00:16:28.016 "raid_level": "raid1", 00:16:28.016 "superblock": true, 00:16:28.016 "num_base_bdevs": 3, 00:16:28.016 "num_base_bdevs_discovered": 3, 00:16:28.016 "num_base_bdevs_operational": 3, 00:16:28.016 "base_bdevs_list": [ 00:16:28.016 { 00:16:28.016 "name": "NewBaseBdev", 00:16:28.016 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:28.016 "is_configured": true, 00:16:28.016 "data_offset": 2048, 00:16:28.016 "data_size": 63488 00:16:28.016 }, 00:16:28.016 { 00:16:28.016 "name": "BaseBdev2", 00:16:28.016 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:28.016 "is_configured": true, 00:16:28.016 "data_offset": 2048, 00:16:28.016 "data_size": 63488 00:16:28.016 }, 00:16:28.016 { 00:16:28.016 "name": "BaseBdev3", 00:16:28.016 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:28.016 "is_configured": true, 00:16:28.016 "data_offset": 2048, 00:16:28.016 "data_size": 63488 00:16:28.016 } 00:16:28.016 ] 00:16:28.016 }' 00:16:28.016 18:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.016 18:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.583 18:31:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:28.583 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.842 [2024-07-15 18:31:14.278753] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.842 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.842 "name": "Existed_Raid", 00:16:28.842 "aliases": [ 00:16:28.842 "c33a5299-a907-49e5-bd7c-2db9e2d96cf9" 00:16:28.842 ], 00:16:28.842 "product_name": "Raid Volume", 00:16:28.842 "block_size": 512, 00:16:28.842 "num_blocks": 63488, 00:16:28.842 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:28.842 "assigned_rate_limits": { 00:16:28.842 "rw_ios_per_sec": 0, 00:16:28.842 "rw_mbytes_per_sec": 0, 00:16:28.842 "r_mbytes_per_sec": 0, 00:16:28.842 "w_mbytes_per_sec": 0 00:16:28.842 }, 00:16:28.842 "claimed": false, 00:16:28.842 "zoned": false, 00:16:28.842 "supported_io_types": { 00:16:28.842 "read": true, 00:16:28.842 "write": true, 00:16:28.842 "unmap": false, 00:16:28.842 "flush": false, 00:16:28.842 "reset": true, 00:16:28.842 "nvme_admin": false, 00:16:28.842 "nvme_io": false, 00:16:28.842 "nvme_io_md": false, 00:16:28.842 "write_zeroes": true, 00:16:28.842 "zcopy": false, 00:16:28.842 "get_zone_info": false, 00:16:28.842 "zone_management": false, 00:16:28.842 "zone_append": false, 00:16:28.842 "compare": false, 00:16:28.842 "compare_and_write": false, 00:16:28.842 "abort": false, 00:16:28.842 "seek_hole": false, 00:16:28.842 "seek_data": false, 00:16:28.842 "copy": false, 00:16:28.842 "nvme_iov_md": false 00:16:28.842 }, 00:16:28.842 "memory_domains": [ 00:16:28.842 { 00:16:28.842 "dma_device_id": "system", 00:16:28.842 "dma_device_type": 1 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.842 
"dma_device_type": 2 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "dma_device_id": "system", 00:16:28.842 "dma_device_type": 1 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.842 "dma_device_type": 2 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "dma_device_id": "system", 00:16:28.842 "dma_device_type": 1 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.842 "dma_device_type": 2 00:16:28.842 } 00:16:28.842 ], 00:16:28.842 "driver_specific": { 00:16:28.842 "raid": { 00:16:28.842 "uuid": "c33a5299-a907-49e5-bd7c-2db9e2d96cf9", 00:16:28.842 "strip_size_kb": 0, 00:16:28.842 "state": "online", 00:16:28.842 "raid_level": "raid1", 00:16:28.842 "superblock": true, 00:16:28.842 "num_base_bdevs": 3, 00:16:28.842 "num_base_bdevs_discovered": 3, 00:16:28.842 "num_base_bdevs_operational": 3, 00:16:28.842 "base_bdevs_list": [ 00:16:28.842 { 00:16:28.842 "name": "NewBaseBdev", 00:16:28.842 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:28.842 "is_configured": true, 00:16:28.842 "data_offset": 2048, 00:16:28.842 "data_size": 63488 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "name": "BaseBdev2", 00:16:28.842 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:28.842 "is_configured": true, 00:16:28.842 "data_offset": 2048, 00:16:28.842 "data_size": 63488 00:16:28.842 }, 00:16:28.842 { 00:16:28.842 "name": "BaseBdev3", 00:16:28.842 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:28.842 "is_configured": true, 00:16:28.842 "data_offset": 2048, 00:16:28.842 "data_size": 63488 00:16:28.842 } 00:16:28.842 ] 00:16:28.842 } 00:16:28.842 } 00:16:28.842 }' 00:16:28.842 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.842 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:28.842 BaseBdev2 00:16:28.842 
BaseBdev3' 00:16:28.842 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.842 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:28.842 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.101 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.101 "name": "NewBaseBdev", 00:16:29.101 "aliases": [ 00:16:29.101 "268d8854-af52-4585-a1e0-2809516ccb3b" 00:16:29.101 ], 00:16:29.101 "product_name": "Malloc disk", 00:16:29.101 "block_size": 512, 00:16:29.101 "num_blocks": 65536, 00:16:29.101 "uuid": "268d8854-af52-4585-a1e0-2809516ccb3b", 00:16:29.101 "assigned_rate_limits": { 00:16:29.101 "rw_ios_per_sec": 0, 00:16:29.101 "rw_mbytes_per_sec": 0, 00:16:29.101 "r_mbytes_per_sec": 0, 00:16:29.101 "w_mbytes_per_sec": 0 00:16:29.101 }, 00:16:29.101 "claimed": true, 00:16:29.101 "claim_type": "exclusive_write", 00:16:29.101 "zoned": false, 00:16:29.101 "supported_io_types": { 00:16:29.101 "read": true, 00:16:29.101 "write": true, 00:16:29.101 "unmap": true, 00:16:29.101 "flush": true, 00:16:29.101 "reset": true, 00:16:29.101 "nvme_admin": false, 00:16:29.101 "nvme_io": false, 00:16:29.101 "nvme_io_md": false, 00:16:29.101 "write_zeroes": true, 00:16:29.101 "zcopy": true, 00:16:29.101 "get_zone_info": false, 00:16:29.101 "zone_management": false, 00:16:29.101 "zone_append": false, 00:16:29.101 "compare": false, 00:16:29.101 "compare_and_write": false, 00:16:29.101 "abort": true, 00:16:29.101 "seek_hole": false, 00:16:29.101 "seek_data": false, 00:16:29.101 "copy": true, 00:16:29.101 "nvme_iov_md": false 00:16:29.101 }, 00:16:29.101 "memory_domains": [ 00:16:29.101 { 00:16:29.101 "dma_device_id": "system", 00:16:29.101 "dma_device_type": 1 00:16:29.101 }, 00:16:29.101 { 
00:16:29.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.101 "dma_device_type": 2 00:16:29.101 } 00:16:29.101 ], 00:16:29.101 "driver_specific": {} 00:16:29.101 }' 00:16:29.101 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.101 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.360 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.618 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.618 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.618 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.618 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:29.618 18:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.878 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.878 "name": 
"BaseBdev2", 00:16:29.878 "aliases": [ 00:16:29.878 "70fc9f48-d4bb-4366-802b-c96031e3cea1" 00:16:29.878 ], 00:16:29.878 "product_name": "Malloc disk", 00:16:29.878 "block_size": 512, 00:16:29.878 "num_blocks": 65536, 00:16:29.878 "uuid": "70fc9f48-d4bb-4366-802b-c96031e3cea1", 00:16:29.878 "assigned_rate_limits": { 00:16:29.878 "rw_ios_per_sec": 0, 00:16:29.878 "rw_mbytes_per_sec": 0, 00:16:29.878 "r_mbytes_per_sec": 0, 00:16:29.878 "w_mbytes_per_sec": 0 00:16:29.878 }, 00:16:29.878 "claimed": true, 00:16:29.878 "claim_type": "exclusive_write", 00:16:29.878 "zoned": false, 00:16:29.878 "supported_io_types": { 00:16:29.878 "read": true, 00:16:29.878 "write": true, 00:16:29.878 "unmap": true, 00:16:29.878 "flush": true, 00:16:29.878 "reset": true, 00:16:29.878 "nvme_admin": false, 00:16:29.878 "nvme_io": false, 00:16:29.878 "nvme_io_md": false, 00:16:29.878 "write_zeroes": true, 00:16:29.878 "zcopy": true, 00:16:29.878 "get_zone_info": false, 00:16:29.878 "zone_management": false, 00:16:29.878 "zone_append": false, 00:16:29.878 "compare": false, 00:16:29.878 "compare_and_write": false, 00:16:29.878 "abort": true, 00:16:29.878 "seek_hole": false, 00:16:29.878 "seek_data": false, 00:16:29.878 "copy": true, 00:16:29.878 "nvme_iov_md": false 00:16:29.878 }, 00:16:29.878 "memory_domains": [ 00:16:29.878 { 00:16:29.878 "dma_device_id": "system", 00:16:29.878 "dma_device_type": 1 00:16:29.878 }, 00:16:29.878 { 00:16:29.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.878 "dma_device_type": 2 00:16:29.878 } 00:16:29.878 ], 00:16:29.878 "driver_specific": {} 00:16:29.878 }' 00:16:29.878 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.878 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.878 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.878 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:16:29.878 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:30.137 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.396 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.396 "name": "BaseBdev3", 00:16:30.396 "aliases": [ 00:16:30.396 "f9c1721c-a456-4c1f-8937-45b4e5d52585" 00:16:30.396 ], 00:16:30.396 "product_name": "Malloc disk", 00:16:30.396 "block_size": 512, 00:16:30.396 "num_blocks": 65536, 00:16:30.396 "uuid": "f9c1721c-a456-4c1f-8937-45b4e5d52585", 00:16:30.396 "assigned_rate_limits": { 00:16:30.396 "rw_ios_per_sec": 0, 00:16:30.396 "rw_mbytes_per_sec": 0, 00:16:30.396 "r_mbytes_per_sec": 0, 00:16:30.396 "w_mbytes_per_sec": 0 00:16:30.396 }, 00:16:30.396 "claimed": true, 00:16:30.396 "claim_type": "exclusive_write", 00:16:30.396 "zoned": 
false, 00:16:30.396 "supported_io_types": { 00:16:30.396 "read": true, 00:16:30.396 "write": true, 00:16:30.396 "unmap": true, 00:16:30.396 "flush": true, 00:16:30.396 "reset": true, 00:16:30.396 "nvme_admin": false, 00:16:30.396 "nvme_io": false, 00:16:30.396 "nvme_io_md": false, 00:16:30.396 "write_zeroes": true, 00:16:30.396 "zcopy": true, 00:16:30.396 "get_zone_info": false, 00:16:30.396 "zone_management": false, 00:16:30.396 "zone_append": false, 00:16:30.396 "compare": false, 00:16:30.396 "compare_and_write": false, 00:16:30.396 "abort": true, 00:16:30.396 "seek_hole": false, 00:16:30.396 "seek_data": false, 00:16:30.396 "copy": true, 00:16:30.396 "nvme_iov_md": false 00:16:30.396 }, 00:16:30.396 "memory_domains": [ 00:16:30.396 { 00:16:30.396 "dma_device_id": "system", 00:16:30.396 "dma_device_type": 1 00:16:30.396 }, 00:16:30.396 { 00:16:30.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.396 "dma_device_type": 2 00:16:30.396 } 00:16:30.396 ], 00:16:30.396 "driver_specific": {} 00:16:30.396 }' 00:16:30.396 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.396 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.655 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.655 18:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.655 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.655 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.655 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.655 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.655 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.655 18:31:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.914 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.914 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.914 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:31.173 [2024-07-15 18:31:16.488414] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.173 [2024-07-15 18:31:16.488437] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.173 [2024-07-15 18:31:16.488482] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.173 [2024-07-15 18:31:16.488752] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.173 [2024-07-15 18:31:16.488761] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e0580 name Existed_Raid, state offline 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2819165 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2819165 ']' 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2819165 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2819165 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2819165' 00:16:31.173 killing process with pid 2819165 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2819165 00:16:31.173 [2024-07-15 18:31:16.558071] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.173 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2819165 00:16:31.173 [2024-07-15 18:31:16.584164] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.433 18:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:31.433 00:16:31.433 real 0m33.551s 00:16:31.433 user 1m3.724s 00:16:31.433 sys 0m4.402s 00:16:31.433 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.433 18:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.433 ************************************ 00:16:31.433 END TEST raid_state_function_test_sb 00:16:31.433 ************************************ 00:16:31.433 18:31:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:31.433 18:31:16 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:31.433 18:31:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:31.433 18:31:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.433 18:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.433 ************************************ 00:16:31.433 START TEST raid_superblock_test 00:16:31.433 ************************************ 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # 
raid_superblock_test raid1 3 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2824988 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2824988 /var/tmp/spdk-raid.sock 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2824988 ']' 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:31.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.433 18:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.433 [2024-07-15 18:31:16.884759] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:16:31.433 [2024-07-15 18:31:16.884821] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824988 ] 00:16:31.692 [2024-07-15 18:31:16.987305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.692 [2024-07-15 18:31:17.078152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.692 [2024-07-15 18:31:17.137242] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.692 [2024-07-15 18:31:17.137277] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:32.629 18:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:32.629 malloc1 00:16:32.629 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.888 [2024-07-15 18:31:18.335382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.888 [2024-07-15 18:31:18.335431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.888 [2024-07-15 18:31:18.335448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c19e20 00:16:32.888 [2024-07-15 18:31:18.335458] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.888 [2024-07-15 18:31:18.337181] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.888 [2024-07-15 18:31:18.337210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.888 pt1 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:32.888 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:32.888 18:31:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:33.147 malloc2 00:16:33.147 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.406 [2024-07-15 18:31:18.853425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.406 [2024-07-15 18:31:18.853467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.406 [2024-07-15 18:31:18.853481] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc3ed0 00:16:33.406 [2024-07-15 18:31:18.853491] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.406 [2024-07-15 18:31:18.854946] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.406 [2024-07-15 18:31:18.854981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.406 pt2 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.406 18:31:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.406 18:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:33.665 malloc3 00:16:33.665 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:33.923 [2024-07-15 18:31:19.375318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:33.923 [2024-07-15 18:31:19.375358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.923 [2024-07-15 18:31:19.375373] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc7a30 00:16:33.923 [2024-07-15 18:31:19.375382] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.923 [2024-07-15 18:31:19.376862] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.923 [2024-07-15 18:31:19.376890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:33.923 pt3 00:16:33.923 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:33.923 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:33.923 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:34.182 [2024-07-15 18:31:19.636038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.182 [2024-07-15 18:31:19.637409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:34.182 [2024-07-15 18:31:19.637466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.182 [2024-07-15 18:31:19.637620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dc8a40 00:16:34.182 [2024-07-15 18:31:19.637630] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.182 [2024-07-15 18:31:19.637829] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dc3050 00:16:34.182 [2024-07-15 18:31:19.637993] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dc8a40 00:16:34.182 [2024-07-15 18:31:19.638002] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1dc8a40 00:16:34.182 [2024-07-15 18:31:19.638108] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.182 18:31:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.182 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.441 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.441 "name": "raid_bdev1", 00:16:34.441 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:34.441 "strip_size_kb": 0, 00:16:34.441 "state": "online", 00:16:34.441 "raid_level": "raid1", 00:16:34.441 "superblock": true, 00:16:34.441 "num_base_bdevs": 3, 00:16:34.441 "num_base_bdevs_discovered": 3, 00:16:34.441 "num_base_bdevs_operational": 3, 00:16:34.441 "base_bdevs_list": [ 00:16:34.441 { 00:16:34.441 "name": "pt1", 00:16:34.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.441 "is_configured": true, 00:16:34.441 "data_offset": 2048, 00:16:34.441 "data_size": 63488 00:16:34.441 }, 00:16:34.441 { 00:16:34.441 "name": "pt2", 00:16:34.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.441 "is_configured": true, 00:16:34.441 "data_offset": 2048, 00:16:34.441 "data_size": 63488 00:16:34.441 }, 00:16:34.441 { 00:16:34.441 "name": "pt3", 00:16:34.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.442 "is_configured": true, 00:16:34.442 "data_offset": 2048, 00:16:34.442 "data_size": 63488 00:16:34.442 } 00:16:34.442 ] 00:16:34.442 }' 00:16:34.442 18:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.442 18:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 
-- # local raid_bdev_info 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:35.009 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:35.268 [2024-07-15 18:31:20.779403] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.268 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:35.268 "name": "raid_bdev1", 00:16:35.268 "aliases": [ 00:16:35.268 "f748f8e8-bbba-4b81-ae31-f06828a75319" 00:16:35.268 ], 00:16:35.268 "product_name": "Raid Volume", 00:16:35.268 "block_size": 512, 00:16:35.268 "num_blocks": 63488, 00:16:35.268 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:35.268 "assigned_rate_limits": { 00:16:35.268 "rw_ios_per_sec": 0, 00:16:35.268 "rw_mbytes_per_sec": 0, 00:16:35.268 "r_mbytes_per_sec": 0, 00:16:35.268 "w_mbytes_per_sec": 0 00:16:35.268 }, 00:16:35.268 "claimed": false, 00:16:35.268 "zoned": false, 00:16:35.268 "supported_io_types": { 00:16:35.268 "read": true, 00:16:35.268 "write": true, 00:16:35.268 "unmap": false, 00:16:35.269 "flush": false, 00:16:35.269 "reset": true, 00:16:35.269 "nvme_admin": false, 00:16:35.269 "nvme_io": false, 00:16:35.269 "nvme_io_md": false, 00:16:35.269 "write_zeroes": true, 00:16:35.269 "zcopy": false, 00:16:35.269 "get_zone_info": false, 00:16:35.269 "zone_management": false, 00:16:35.269 "zone_append": false, 00:16:35.269 "compare": false, 00:16:35.269 "compare_and_write": false, 00:16:35.269 "abort": false, 00:16:35.269 "seek_hole": false, 
00:16:35.269 "seek_data": false, 00:16:35.269 "copy": false, 00:16:35.269 "nvme_iov_md": false 00:16:35.269 }, 00:16:35.269 "memory_domains": [ 00:16:35.269 { 00:16:35.269 "dma_device_id": "system", 00:16:35.269 "dma_device_type": 1 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.269 "dma_device_type": 2 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "dma_device_id": "system", 00:16:35.269 "dma_device_type": 1 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.269 "dma_device_type": 2 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "dma_device_id": "system", 00:16:35.269 "dma_device_type": 1 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.269 "dma_device_type": 2 00:16:35.269 } 00:16:35.269 ], 00:16:35.269 "driver_specific": { 00:16:35.269 "raid": { 00:16:35.269 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:35.269 "strip_size_kb": 0, 00:16:35.269 "state": "online", 00:16:35.269 "raid_level": "raid1", 00:16:35.269 "superblock": true, 00:16:35.269 "num_base_bdevs": 3, 00:16:35.269 "num_base_bdevs_discovered": 3, 00:16:35.269 "num_base_bdevs_operational": 3, 00:16:35.269 "base_bdevs_list": [ 00:16:35.269 { 00:16:35.269 "name": "pt1", 00:16:35.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.269 "is_configured": true, 00:16:35.269 "data_offset": 2048, 00:16:35.269 "data_size": 63488 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "name": "pt2", 00:16:35.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.269 "is_configured": true, 00:16:35.269 "data_offset": 2048, 00:16:35.269 "data_size": 63488 00:16:35.269 }, 00:16:35.269 { 00:16:35.269 "name": "pt3", 00:16:35.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.269 "is_configured": true, 00:16:35.269 "data_offset": 2048, 00:16:35.269 "data_size": 63488 00:16:35.269 } 00:16:35.269 ] 00:16:35.269 } 00:16:35.269 } 00:16:35.269 }' 00:16:35.269 18:31:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.528 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:35.528 pt2 00:16:35.528 pt3' 00:16:35.528 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:35.528 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:35.528 18:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:35.787 "name": "pt1", 00:16:35.787 "aliases": [ 00:16:35.787 "00000000-0000-0000-0000-000000000001" 00:16:35.787 ], 00:16:35.787 "product_name": "passthru", 00:16:35.787 "block_size": 512, 00:16:35.787 "num_blocks": 65536, 00:16:35.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.787 "assigned_rate_limits": { 00:16:35.787 "rw_ios_per_sec": 0, 00:16:35.787 "rw_mbytes_per_sec": 0, 00:16:35.787 "r_mbytes_per_sec": 0, 00:16:35.787 "w_mbytes_per_sec": 0 00:16:35.787 }, 00:16:35.787 "claimed": true, 00:16:35.787 "claim_type": "exclusive_write", 00:16:35.787 "zoned": false, 00:16:35.787 "supported_io_types": { 00:16:35.787 "read": true, 00:16:35.787 "write": true, 00:16:35.787 "unmap": true, 00:16:35.787 "flush": true, 00:16:35.787 "reset": true, 00:16:35.787 "nvme_admin": false, 00:16:35.787 "nvme_io": false, 00:16:35.787 "nvme_io_md": false, 00:16:35.787 "write_zeroes": true, 00:16:35.787 "zcopy": true, 00:16:35.787 "get_zone_info": false, 00:16:35.787 "zone_management": false, 00:16:35.787 "zone_append": false, 00:16:35.787 "compare": false, 00:16:35.787 "compare_and_write": false, 00:16:35.787 "abort": true, 00:16:35.787 "seek_hole": false, 00:16:35.787 "seek_data": false, 
00:16:35.787 "copy": true, 00:16:35.787 "nvme_iov_md": false 00:16:35.787 }, 00:16:35.787 "memory_domains": [ 00:16:35.787 { 00:16:35.787 "dma_device_id": "system", 00:16:35.787 "dma_device_type": 1 00:16:35.787 }, 00:16:35.787 { 00:16:35.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.787 "dma_device_type": 2 00:16:35.787 } 00:16:35.787 ], 00:16:35.787 "driver_specific": { 00:16:35.787 "passthru": { 00:16:35.787 "name": "pt1", 00:16:35.787 "base_bdev_name": "malloc1" 00:16:35.787 } 00:16:35.787 } 00:16:35.787 }' 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:35.787 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt2 00:16:36.046 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.310 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.310 "name": "pt2", 00:16:36.310 "aliases": [ 00:16:36.310 "00000000-0000-0000-0000-000000000002" 00:16:36.310 ], 00:16:36.310 "product_name": "passthru", 00:16:36.310 "block_size": 512, 00:16:36.310 "num_blocks": 65536, 00:16:36.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.310 "assigned_rate_limits": { 00:16:36.310 "rw_ios_per_sec": 0, 00:16:36.310 "rw_mbytes_per_sec": 0, 00:16:36.310 "r_mbytes_per_sec": 0, 00:16:36.310 "w_mbytes_per_sec": 0 00:16:36.310 }, 00:16:36.310 "claimed": true, 00:16:36.310 "claim_type": "exclusive_write", 00:16:36.310 "zoned": false, 00:16:36.310 "supported_io_types": { 00:16:36.310 "read": true, 00:16:36.310 "write": true, 00:16:36.310 "unmap": true, 00:16:36.310 "flush": true, 00:16:36.310 "reset": true, 00:16:36.310 "nvme_admin": false, 00:16:36.310 "nvme_io": false, 00:16:36.310 "nvme_io_md": false, 00:16:36.310 "write_zeroes": true, 00:16:36.310 "zcopy": true, 00:16:36.310 "get_zone_info": false, 00:16:36.310 "zone_management": false, 00:16:36.310 "zone_append": false, 00:16:36.310 "compare": false, 00:16:36.310 "compare_and_write": false, 00:16:36.310 "abort": true, 00:16:36.310 "seek_hole": false, 00:16:36.310 "seek_data": false, 00:16:36.310 "copy": true, 00:16:36.310 "nvme_iov_md": false 00:16:36.310 }, 00:16:36.310 "memory_domains": [ 00:16:36.310 { 00:16:36.310 "dma_device_id": "system", 00:16:36.310 "dma_device_type": 1 00:16:36.310 }, 00:16:36.310 { 00:16:36.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.310 "dma_device_type": 2 00:16:36.310 } 00:16:36.310 ], 00:16:36.310 "driver_specific": { 00:16:36.310 "passthru": { 00:16:36.310 "name": "pt2", 00:16:36.310 "base_bdev_name": "malloc2" 00:16:36.310 } 00:16:36.310 } 00:16:36.310 }' 00:16:36.310 18:31:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.310 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.310 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.310 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.569 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.569 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:36.569 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.569 18:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:36.569 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.828 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.828 "name": "pt3", 00:16:36.828 "aliases": [ 00:16:36.828 "00000000-0000-0000-0000-000000000003" 00:16:36.828 ], 00:16:36.828 "product_name": "passthru", 00:16:36.828 "block_size": 512, 00:16:36.828 "num_blocks": 65536, 00:16:36.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.828 "assigned_rate_limits": { 
00:16:36.828 "rw_ios_per_sec": 0, 00:16:36.828 "rw_mbytes_per_sec": 0, 00:16:36.828 "r_mbytes_per_sec": 0, 00:16:36.828 "w_mbytes_per_sec": 0 00:16:36.828 }, 00:16:36.828 "claimed": true, 00:16:36.828 "claim_type": "exclusive_write", 00:16:36.828 "zoned": false, 00:16:36.828 "supported_io_types": { 00:16:36.828 "read": true, 00:16:36.828 "write": true, 00:16:36.828 "unmap": true, 00:16:36.828 "flush": true, 00:16:36.828 "reset": true, 00:16:36.828 "nvme_admin": false, 00:16:36.828 "nvme_io": false, 00:16:36.828 "nvme_io_md": false, 00:16:36.828 "write_zeroes": true, 00:16:36.828 "zcopy": true, 00:16:36.828 "get_zone_info": false, 00:16:36.828 "zone_management": false, 00:16:36.828 "zone_append": false, 00:16:36.828 "compare": false, 00:16:36.828 "compare_and_write": false, 00:16:36.828 "abort": true, 00:16:36.828 "seek_hole": false, 00:16:36.828 "seek_data": false, 00:16:36.828 "copy": true, 00:16:36.828 "nvme_iov_md": false 00:16:36.828 }, 00:16:36.828 "memory_domains": [ 00:16:36.828 { 00:16:36.828 "dma_device_id": "system", 00:16:36.828 "dma_device_type": 1 00:16:36.828 }, 00:16:36.828 { 00:16:36.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.828 "dma_device_type": 2 00:16:36.828 } 00:16:36.828 ], 00:16:36.828 "driver_specific": { 00:16:36.828 "passthru": { 00:16:36.828 "name": "pt3", 00:16:36.828 "base_bdev_name": "malloc3" 00:16:36.828 } 00:16:36.828 } 00:16:36.828 }' 00:16:36.828 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.115 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.373 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.373 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.373 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:37.373 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:37.631 [2024-07-15 18:31:22.957263] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.631 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f748f8e8-bbba-4b81-ae31-f06828a75319 00:16:37.631 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f748f8e8-bbba-4b81-ae31-f06828a75319 ']' 00:16:37.631 18:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:37.889 [2024-07-15 18:31:23.217659] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.889 [2024-07-15 18:31:23.217680] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.889 [2024-07-15 18:31:23.217728] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.889 [2024-07-15 18:31:23.217793] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:16:37.889 [2024-07-15 18:31:23.217802] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dc8a40 name raid_bdev1, state offline 00:16:37.889 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.889 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:38.146 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:38.146 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:38.146 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.146 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:38.405 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.405 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:38.664 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.664 18:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:38.922 18:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:38.922 18:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' 
false == true ']' 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:39.180 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 
raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:39.180 [2024-07-15 18:31:24.729655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:39.180 [2024-07-15 18:31:24.731079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:39.180 [2024-07-15 18:31:24.731123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:39.180 [2024-07-15 18:31:24.731169] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:39.180 [2024-07-15 18:31:24.731206] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:39.180 [2024-07-15 18:31:24.731226] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:39.180 [2024-07-15 18:31:24.731241] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.180 [2024-07-15 18:31:24.731249] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dc4100 name raid_bdev1, state configuring 00:16:39.439 request: 00:16:39.439 { 00:16:39.439 "name": "raid_bdev1", 00:16:39.439 "raid_level": "raid1", 00:16:39.439 "base_bdevs": [ 00:16:39.439 "malloc1", 00:16:39.439 "malloc2", 00:16:39.439 "malloc3" 00:16:39.439 ], 00:16:39.439 "superblock": false, 00:16:39.439 "method": "bdev_raid_create", 00:16:39.439 "req_id": 1 00:16:39.439 } 00:16:39.439 Got JSON-RPC error response 00:16:39.439 response: 00:16:39.439 { 00:16:39.439 "code": -17, 00:16:39.439 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:39.439 } 00:16:39.439 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:39.439 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:39.439 18:31:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:39.439 18:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:39.439 18:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.439 18:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:39.697 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:39.697 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:39.697 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.697 [2024-07-15 18:31:25.242976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.697 [2024-07-15 18:31:25.243021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.697 [2024-07-15 18:31:25.243037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc4570 00:16:39.697 [2024-07-15 18:31:25.243046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.697 [2024-07-15 18:31:25.244726] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.697 [2024-07-15 18:31:25.244754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.697 [2024-07-15 18:31:25.244816] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:39.697 [2024-07-15 18:31:25.244840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.697 pt1 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 3 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.955 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.212 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.212 "name": "raid_bdev1", 00:16:40.212 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:40.212 "strip_size_kb": 0, 00:16:40.212 "state": "configuring", 00:16:40.212 "raid_level": "raid1", 00:16:40.212 "superblock": true, 00:16:40.212 "num_base_bdevs": 3, 00:16:40.212 "num_base_bdevs_discovered": 1, 00:16:40.212 "num_base_bdevs_operational": 3, 00:16:40.212 "base_bdevs_list": [ 00:16:40.212 { 00:16:40.212 "name": "pt1", 00:16:40.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.212 "is_configured": true, 00:16:40.212 "data_offset": 2048, 00:16:40.212 
"data_size": 63488 00:16:40.212 }, 00:16:40.212 { 00:16:40.212 "name": null, 00:16:40.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.213 "is_configured": false, 00:16:40.213 "data_offset": 2048, 00:16:40.213 "data_size": 63488 00:16:40.213 }, 00:16:40.213 { 00:16:40.213 "name": null, 00:16:40.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.213 "is_configured": false, 00:16:40.213 "data_offset": 2048, 00:16:40.213 "data_size": 63488 00:16:40.213 } 00:16:40.213 ] 00:16:40.213 }' 00:16:40.213 18:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.213 18:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.777 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:16:40.777 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.035 [2024-07-15 18:31:26.374025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.035 [2024-07-15 18:31:26.374073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.035 [2024-07-15 18:31:26.374089] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc52e0 00:16:41.035 [2024-07-15 18:31:26.374098] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.035 [2024-07-15 18:31:26.374446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.035 [2024-07-15 18:31:26.374463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.035 [2024-07-15 18:31:26.374522] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.035 [2024-07-15 18:31:26.374539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:16:41.035 pt2 00:16:41.035 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:41.293 [2024-07-15 18:31:26.630714] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.293 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.551 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.551 "name": "raid_bdev1", 00:16:41.551 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:41.551 "strip_size_kb": 
0, 00:16:41.551 "state": "configuring", 00:16:41.551 "raid_level": "raid1", 00:16:41.551 "superblock": true, 00:16:41.551 "num_base_bdevs": 3, 00:16:41.551 "num_base_bdevs_discovered": 1, 00:16:41.551 "num_base_bdevs_operational": 3, 00:16:41.551 "base_bdevs_list": [ 00:16:41.551 { 00:16:41.551 "name": "pt1", 00:16:41.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.551 "is_configured": true, 00:16:41.551 "data_offset": 2048, 00:16:41.551 "data_size": 63488 00:16:41.551 }, 00:16:41.551 { 00:16:41.551 "name": null, 00:16:41.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.551 "is_configured": false, 00:16:41.551 "data_offset": 2048, 00:16:41.551 "data_size": 63488 00:16:41.551 }, 00:16:41.551 { 00:16:41.551 "name": null, 00:16:41.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.551 "is_configured": false, 00:16:41.552 "data_offset": 2048, 00:16:41.552 "data_size": 63488 00:16:41.552 } 00:16:41.552 ] 00:16:41.552 }' 00:16:41.552 18:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.552 18:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.117 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:42.117 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:42.117 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:42.374 [2024-07-15 18:31:27.765772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:42.374 [2024-07-15 18:31:27.765828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.374 [2024-07-15 18:31:27.765846] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c1aa40 00:16:42.374 
[2024-07-15 18:31:27.765855] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.374 [2024-07-15 18:31:27.766210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.374 [2024-07-15 18:31:27.766228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:42.374 [2024-07-15 18:31:27.766288] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:42.374 [2024-07-15 18:31:27.766307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.374 pt2 00:16:42.374 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:42.374 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:42.374 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:42.632 [2024-07-15 18:31:27.954285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:42.632 [2024-07-15 18:31:27.954324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.632 [2024-07-15 18:31:27.954338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc9400 00:16:42.632 [2024-07-15 18:31:27.954347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.632 [2024-07-15 18:31:27.954668] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.632 [2024-07-15 18:31:27.954683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:42.632 [2024-07-15 18:31:27.954740] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:42.632 [2024-07-15 18:31:27.954757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:16:42.632 [2024-07-15 18:31:27.954870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dc5e00 00:16:42.632 [2024-07-15 18:31:27.954879] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:42.632 [2024-07-15 18:31:27.955073] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dc80e0 00:16:42.632 [2024-07-15 18:31:27.955209] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dc5e00 00:16:42.632 [2024-07-15 18:31:27.955217] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1dc5e00 00:16:42.632 [2024-07-15 18:31:27.955318] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.632 pt3 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:42.632 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.633 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.633 18:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.890 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.890 "name": "raid_bdev1", 00:16:42.890 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:42.890 "strip_size_kb": 0, 00:16:42.890 "state": "online", 00:16:42.890 "raid_level": "raid1", 00:16:42.890 "superblock": true, 00:16:42.890 "num_base_bdevs": 3, 00:16:42.890 "num_base_bdevs_discovered": 3, 00:16:42.890 "num_base_bdevs_operational": 3, 00:16:42.890 "base_bdevs_list": [ 00:16:42.890 { 00:16:42.890 "name": "pt1", 00:16:42.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.890 "is_configured": true, 00:16:42.890 "data_offset": 2048, 00:16:42.890 "data_size": 63488 00:16:42.890 }, 00:16:42.890 { 00:16:42.890 "name": "pt2", 00:16:42.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.890 "is_configured": true, 00:16:42.890 "data_offset": 2048, 00:16:42.891 "data_size": 63488 00:16:42.891 }, 00:16:42.891 { 00:16:42.891 "name": "pt3", 00:16:42.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.891 "is_configured": true, 00:16:42.891 "data_offset": 2048, 00:16:42.891 "data_size": 63488 00:16:42.891 } 00:16:42.891 ] 00:16:42.891 }' 00:16:42.891 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.891 18:31:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:43.457 18:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:43.715 [2024-07-15 18:31:29.101669] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.715 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:43.715 "name": "raid_bdev1", 00:16:43.715 "aliases": [ 00:16:43.715 "f748f8e8-bbba-4b81-ae31-f06828a75319" 00:16:43.715 ], 00:16:43.715 "product_name": "Raid Volume", 00:16:43.715 "block_size": 512, 00:16:43.715 "num_blocks": 63488, 00:16:43.715 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:43.715 "assigned_rate_limits": { 00:16:43.715 "rw_ios_per_sec": 0, 00:16:43.715 "rw_mbytes_per_sec": 0, 00:16:43.715 "r_mbytes_per_sec": 0, 00:16:43.715 "w_mbytes_per_sec": 0 00:16:43.715 }, 00:16:43.715 "claimed": false, 00:16:43.715 "zoned": false, 00:16:43.715 "supported_io_types": { 00:16:43.715 "read": true, 00:16:43.715 "write": true, 00:16:43.715 "unmap": false, 00:16:43.715 "flush": false, 00:16:43.715 "reset": true, 00:16:43.715 "nvme_admin": false, 00:16:43.715 "nvme_io": false, 00:16:43.715 "nvme_io_md": false, 00:16:43.715 "write_zeroes": true, 00:16:43.715 "zcopy": false, 00:16:43.715 "get_zone_info": false, 00:16:43.715 "zone_management": false, 00:16:43.715 "zone_append": false, 
00:16:43.715 "compare": false, 00:16:43.715 "compare_and_write": false, 00:16:43.715 "abort": false, 00:16:43.715 "seek_hole": false, 00:16:43.715 "seek_data": false, 00:16:43.715 "copy": false, 00:16:43.715 "nvme_iov_md": false 00:16:43.715 }, 00:16:43.715 "memory_domains": [ 00:16:43.715 { 00:16:43.715 "dma_device_id": "system", 00:16:43.715 "dma_device_type": 1 00:16:43.715 }, 00:16:43.715 { 00:16:43.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.715 "dma_device_type": 2 00:16:43.715 }, 00:16:43.715 { 00:16:43.715 "dma_device_id": "system", 00:16:43.715 "dma_device_type": 1 00:16:43.715 }, 00:16:43.715 { 00:16:43.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.715 "dma_device_type": 2 00:16:43.715 }, 00:16:43.715 { 00:16:43.715 "dma_device_id": "system", 00:16:43.715 "dma_device_type": 1 00:16:43.715 }, 00:16:43.715 { 00:16:43.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.715 "dma_device_type": 2 00:16:43.715 } 00:16:43.715 ], 00:16:43.715 "driver_specific": { 00:16:43.715 "raid": { 00:16:43.715 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:43.715 "strip_size_kb": 0, 00:16:43.715 "state": "online", 00:16:43.715 "raid_level": "raid1", 00:16:43.715 "superblock": true, 00:16:43.715 "num_base_bdevs": 3, 00:16:43.715 "num_base_bdevs_discovered": 3, 00:16:43.715 "num_base_bdevs_operational": 3, 00:16:43.715 "base_bdevs_list": [ 00:16:43.715 { 00:16:43.715 "name": "pt1", 00:16:43.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.715 "is_configured": true, 00:16:43.715 "data_offset": 2048, 00:16:43.715 "data_size": 63488 00:16:43.715 }, 00:16:43.715 { 00:16:43.715 "name": "pt2", 00:16:43.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.715 "is_configured": true, 00:16:43.715 "data_offset": 2048, 00:16:43.715 "data_size": 63488 00:16:43.715 }, 00:16:43.715 { 00:16:43.716 "name": "pt3", 00:16:43.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.716 "is_configured": true, 00:16:43.716 "data_offset": 2048, 
00:16:43.716 "data_size": 63488 00:16:43.716 } 00:16:43.716 ] 00:16:43.716 } 00:16:43.716 } 00:16:43.716 }' 00:16:43.716 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.716 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:43.716 pt2 00:16:43.716 pt3' 00:16:43.716 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.716 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:43.716 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.974 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.974 "name": "pt1", 00:16:43.974 "aliases": [ 00:16:43.974 "00000000-0000-0000-0000-000000000001" 00:16:43.974 ], 00:16:43.974 "product_name": "passthru", 00:16:43.974 "block_size": 512, 00:16:43.974 "num_blocks": 65536, 00:16:43.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.974 "assigned_rate_limits": { 00:16:43.974 "rw_ios_per_sec": 0, 00:16:43.974 "rw_mbytes_per_sec": 0, 00:16:43.974 "r_mbytes_per_sec": 0, 00:16:43.974 "w_mbytes_per_sec": 0 00:16:43.974 }, 00:16:43.974 "claimed": true, 00:16:43.974 "claim_type": "exclusive_write", 00:16:43.974 "zoned": false, 00:16:43.974 "supported_io_types": { 00:16:43.974 "read": true, 00:16:43.974 "write": true, 00:16:43.974 "unmap": true, 00:16:43.974 "flush": true, 00:16:43.974 "reset": true, 00:16:43.974 "nvme_admin": false, 00:16:43.974 "nvme_io": false, 00:16:43.974 "nvme_io_md": false, 00:16:43.974 "write_zeroes": true, 00:16:43.974 "zcopy": true, 00:16:43.974 "get_zone_info": false, 00:16:43.974 "zone_management": false, 00:16:43.974 "zone_append": false, 00:16:43.974 "compare": false, 
00:16:43.974 "compare_and_write": false, 00:16:43.974 "abort": true, 00:16:43.974 "seek_hole": false, 00:16:43.974 "seek_data": false, 00:16:43.974 "copy": true, 00:16:43.974 "nvme_iov_md": false 00:16:43.974 }, 00:16:43.974 "memory_domains": [ 00:16:43.974 { 00:16:43.974 "dma_device_id": "system", 00:16:43.974 "dma_device_type": 1 00:16:43.974 }, 00:16:43.974 { 00:16:43.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.974 "dma_device_type": 2 00:16:43.974 } 00:16:43.974 ], 00:16:43.974 "driver_specific": { 00:16:43.974 "passthru": { 00:16:43.974 "name": "pt1", 00:16:43.974 "base_bdev_name": "malloc1" 00:16:43.974 } 00:16:43.974 } 00:16:43.974 }' 00:16:43.974 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.974 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.233 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.492 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.492 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:44.492 18:31:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:44.492 18:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.750 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.750 "name": "pt2", 00:16:44.750 "aliases": [ 00:16:44.750 "00000000-0000-0000-0000-000000000002" 00:16:44.750 ], 00:16:44.750 "product_name": "passthru", 00:16:44.750 "block_size": 512, 00:16:44.750 "num_blocks": 65536, 00:16:44.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.750 "assigned_rate_limits": { 00:16:44.750 "rw_ios_per_sec": 0, 00:16:44.750 "rw_mbytes_per_sec": 0, 00:16:44.750 "r_mbytes_per_sec": 0, 00:16:44.750 "w_mbytes_per_sec": 0 00:16:44.750 }, 00:16:44.750 "claimed": true, 00:16:44.750 "claim_type": "exclusive_write", 00:16:44.750 "zoned": false, 00:16:44.750 "supported_io_types": { 00:16:44.750 "read": true, 00:16:44.750 "write": true, 00:16:44.750 "unmap": true, 00:16:44.750 "flush": true, 00:16:44.750 "reset": true, 00:16:44.750 "nvme_admin": false, 00:16:44.750 "nvme_io": false, 00:16:44.750 "nvme_io_md": false, 00:16:44.750 "write_zeroes": true, 00:16:44.750 "zcopy": true, 00:16:44.750 "get_zone_info": false, 00:16:44.750 "zone_management": false, 00:16:44.750 "zone_append": false, 00:16:44.750 "compare": false, 00:16:44.750 "compare_and_write": false, 00:16:44.750 "abort": true, 00:16:44.750 "seek_hole": false, 00:16:44.751 "seek_data": false, 00:16:44.751 "copy": true, 00:16:44.751 "nvme_iov_md": false 00:16:44.751 }, 00:16:44.751 "memory_domains": [ 00:16:44.751 { 00:16:44.751 "dma_device_id": "system", 00:16:44.751 "dma_device_type": 1 00:16:44.751 }, 00:16:44.751 { 00:16:44.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.751 "dma_device_type": 2 00:16:44.751 } 00:16:44.751 ], 00:16:44.751 "driver_specific": { 00:16:44.751 "passthru": { 00:16:44.751 
"name": "pt2", 00:16:44.751 "base_bdev_name": "malloc2" 00:16:44.751 } 00:16:44.751 } 00:16:44.751 }' 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:44.751 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:45.009 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:45.267 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:45.267 "name": "pt3", 00:16:45.267 "aliases": [ 00:16:45.267 "00000000-0000-0000-0000-000000000003" 00:16:45.267 ], 00:16:45.267 "product_name": "passthru", 00:16:45.267 "block_size": 512, 00:16:45.268 
"num_blocks": 65536, 00:16:45.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.268 "assigned_rate_limits": { 00:16:45.268 "rw_ios_per_sec": 0, 00:16:45.268 "rw_mbytes_per_sec": 0, 00:16:45.268 "r_mbytes_per_sec": 0, 00:16:45.268 "w_mbytes_per_sec": 0 00:16:45.268 }, 00:16:45.268 "claimed": true, 00:16:45.268 "claim_type": "exclusive_write", 00:16:45.268 "zoned": false, 00:16:45.268 "supported_io_types": { 00:16:45.268 "read": true, 00:16:45.268 "write": true, 00:16:45.268 "unmap": true, 00:16:45.268 "flush": true, 00:16:45.268 "reset": true, 00:16:45.268 "nvme_admin": false, 00:16:45.268 "nvme_io": false, 00:16:45.268 "nvme_io_md": false, 00:16:45.268 "write_zeroes": true, 00:16:45.268 "zcopy": true, 00:16:45.268 "get_zone_info": false, 00:16:45.268 "zone_management": false, 00:16:45.268 "zone_append": false, 00:16:45.268 "compare": false, 00:16:45.268 "compare_and_write": false, 00:16:45.268 "abort": true, 00:16:45.268 "seek_hole": false, 00:16:45.268 "seek_data": false, 00:16:45.268 "copy": true, 00:16:45.268 "nvme_iov_md": false 00:16:45.268 }, 00:16:45.268 "memory_domains": [ 00:16:45.268 { 00:16:45.268 "dma_device_id": "system", 00:16:45.268 "dma_device_type": 1 00:16:45.268 }, 00:16:45.268 { 00:16:45.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.268 "dma_device_type": 2 00:16:45.268 } 00:16:45.268 ], 00:16:45.268 "driver_specific": { 00:16:45.268 "passthru": { 00:16:45.268 "name": "pt3", 00:16:45.268 "base_bdev_name": "malloc3" 00:16:45.268 } 00:16:45.268 } 00:16:45.268 }' 00:16:45.268 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.268 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.268 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.268 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.268 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:16:45.526 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.526 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.526 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.526 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.526 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.526 18:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.526 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.526 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:45.526 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:45.785 [2024-07-15 18:31:31.279540] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.785 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f748f8e8-bbba-4b81-ae31-f06828a75319 '!=' f748f8e8-bbba-4b81-ae31-f06828a75319 ']' 00:16:45.785 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:45.785 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:45.785 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:45.785 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:46.045 [2024-07-15 18:31:31.535961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.045 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.302 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.302 "name": "raid_bdev1", 00:16:46.302 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:46.302 "strip_size_kb": 0, 00:16:46.302 "state": "online", 00:16:46.302 "raid_level": "raid1", 00:16:46.302 "superblock": true, 00:16:46.302 "num_base_bdevs": 3, 00:16:46.302 "num_base_bdevs_discovered": 2, 00:16:46.302 "num_base_bdevs_operational": 2, 00:16:46.302 "base_bdevs_list": [ 00:16:46.302 { 00:16:46.302 "name": null, 00:16:46.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.302 "is_configured": false, 00:16:46.302 
"data_offset": 2048, 00:16:46.302 "data_size": 63488 00:16:46.302 }, 00:16:46.302 { 00:16:46.302 "name": "pt2", 00:16:46.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.302 "is_configured": true, 00:16:46.302 "data_offset": 2048, 00:16:46.302 "data_size": 63488 00:16:46.302 }, 00:16:46.302 { 00:16:46.302 "name": "pt3", 00:16:46.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.302 "is_configured": true, 00:16:46.302 "data_offset": 2048, 00:16:46.302 "data_size": 63488 00:16:46.302 } 00:16:46.302 ] 00:16:46.302 }' 00:16:46.302 18:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.302 18:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.238 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:47.238 [2024-07-15 18:31:32.679007] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.238 [2024-07-15 18:31:32.679034] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.238 [2024-07-15 18:31:32.679083] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.238 [2024-07-15 18:31:32.679139] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.238 [2024-07-15 18:31:32.679149] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dc5e00 name raid_bdev1, state offline 00:16:47.238 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.238 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:47.497 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 
00:16:47.497 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:47.497 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:47.497 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:47.497 18:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:47.756 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:47.756 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:47.756 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:48.015 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:48.015 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:48.015 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:48.015 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:48.015 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.273 [2024-07-15 18:31:33.717748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.273 [2024-07-15 18:31:33.717791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.273 [2024-07-15 18:31:33.717805] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc9630 00:16:48.273 [2024-07-15 18:31:33.717814] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.273 [2024-07-15 18:31:33.719493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.273 [2024-07-15 18:31:33.719522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.273 [2024-07-15 18:31:33.719583] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:48.273 [2024-07-15 18:31:33.719608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.273 pt2 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.273 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:48.533 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.533 "name": "raid_bdev1", 00:16:48.533 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:48.533 "strip_size_kb": 0, 00:16:48.533 "state": "configuring", 00:16:48.533 "raid_level": "raid1", 00:16:48.533 "superblock": true, 00:16:48.533 "num_base_bdevs": 3, 00:16:48.533 "num_base_bdevs_discovered": 1, 00:16:48.533 "num_base_bdevs_operational": 2, 00:16:48.533 "base_bdevs_list": [ 00:16:48.533 { 00:16:48.533 "name": null, 00:16:48.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.533 "is_configured": false, 00:16:48.533 "data_offset": 2048, 00:16:48.533 "data_size": 63488 00:16:48.533 }, 00:16:48.533 { 00:16:48.533 "name": "pt2", 00:16:48.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.533 "is_configured": true, 00:16:48.533 "data_offset": 2048, 00:16:48.533 "data_size": 63488 00:16:48.533 }, 00:16:48.533 { 00:16:48.533 "name": null, 00:16:48.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.533 "is_configured": false, 00:16:48.533 "data_offset": 2048, 00:16:48.533 "data_size": 63488 00:16:48.533 } 00:16:48.533 ] 00:16:48.533 }' 00:16:48.533 18:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.533 18:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.101 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:49.101 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:49.101 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:16:49.101 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:49.360 [2024-07-15 18:31:34.756547] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:49.360 [2024-07-15 18:31:34.756594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.360 [2024-07-15 18:31:34.756610] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dcb500 00:16:49.360 [2024-07-15 18:31:34.756619] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.360 [2024-07-15 18:31:34.756972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.360 [2024-07-15 18:31:34.756989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:49.360 [2024-07-15 18:31:34.757050] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:49.360 [2024-07-15 18:31:34.757068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:49.360 [2024-07-15 18:31:34.757165] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dc6510 00:16:49.360 [2024-07-15 18:31:34.757174] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:49.360 [2024-07-15 18:31:34.757352] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dc58a0 00:16:49.360 [2024-07-15 18:31:34.757481] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dc6510 00:16:49.360 [2024-07-15 18:31:34.757490] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1dc6510 00:16:49.360 [2024-07-15 18:31:34.757600] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.360 pt3 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.361 18:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.620 18:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.620 "name": "raid_bdev1", 00:16:49.620 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:49.620 "strip_size_kb": 0, 00:16:49.620 "state": "online", 00:16:49.620 "raid_level": "raid1", 00:16:49.620 "superblock": true, 00:16:49.620 "num_base_bdevs": 3, 00:16:49.620 "num_base_bdevs_discovered": 2, 00:16:49.620 "num_base_bdevs_operational": 2, 00:16:49.620 "base_bdevs_list": [ 00:16:49.620 { 00:16:49.620 "name": null, 00:16:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.620 "is_configured": false, 00:16:49.620 "data_offset": 2048, 00:16:49.620 "data_size": 63488 00:16:49.620 }, 00:16:49.620 { 00:16:49.620 "name": "pt2", 00:16:49.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.620 "is_configured": true, 00:16:49.620 
"data_offset": 2048, 00:16:49.620 "data_size": 63488 00:16:49.620 }, 00:16:49.620 { 00:16:49.620 "name": "pt3", 00:16:49.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.620 "is_configured": true, 00:16:49.620 "data_offset": 2048, 00:16:49.620 "data_size": 63488 00:16:49.620 } 00:16:49.620 ] 00:16:49.620 }' 00:16:49.620 18:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.620 18:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.187 18:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:50.446 [2024-07-15 18:31:35.911755] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.446 [2024-07-15 18:31:35.911782] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.446 [2024-07-15 18:31:35.911834] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.446 [2024-07-15 18:31:35.911887] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.446 [2024-07-15 18:31:35.911896] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dc6510 name raid_bdev1, state offline 00:16:50.446 18:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.446 18:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:50.704 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:50.705 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:50.705 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:16:50.705 18:31:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:16:50.705 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:50.963 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.248 [2024-07-15 18:31:36.673768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.248 [2024-07-15 18:31:36.673813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.248 [2024-07-15 18:31:36.673827] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dcb0c0 00:16:51.248 [2024-07-15 18:31:36.673836] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.248 [2024-07-15 18:31:36.675518] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.248 [2024-07-15 18:31:36.675548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.248 [2024-07-15 18:31:36.675611] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.248 [2024-07-15 18:31:36.675637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.248 [2024-07-15 18:31:36.675740] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:51.248 [2024-07-15 18:31:36.675750] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.248 [2024-07-15 18:31:36.675762] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c193c0 name raid_bdev1, state configuring 00:16:51.248 [2024-07-15 18:31:36.675783] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.248 pt1 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.248 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.507 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.507 "name": "raid_bdev1", 00:16:51.507 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:51.507 "strip_size_kb": 0, 00:16:51.507 "state": "configuring", 00:16:51.507 "raid_level": "raid1", 00:16:51.507 "superblock": true, 00:16:51.507 "num_base_bdevs": 3, 
00:16:51.507 "num_base_bdevs_discovered": 1, 00:16:51.507 "num_base_bdevs_operational": 2, 00:16:51.507 "base_bdevs_list": [ 00:16:51.507 { 00:16:51.507 "name": null, 00:16:51.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.507 "is_configured": false, 00:16:51.507 "data_offset": 2048, 00:16:51.507 "data_size": 63488 00:16:51.507 }, 00:16:51.507 { 00:16:51.507 "name": "pt2", 00:16:51.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.507 "is_configured": true, 00:16:51.507 "data_offset": 2048, 00:16:51.507 "data_size": 63488 00:16:51.507 }, 00:16:51.507 { 00:16:51.507 "name": null, 00:16:51.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.507 "is_configured": false, 00:16:51.507 "data_offset": 2048, 00:16:51.507 "data_size": 63488 00:16:51.507 } 00:16:51.507 ] 00:16:51.507 }' 00:16:51.507 18:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.507 18:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.075 18:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:52.075 18:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:52.334 18:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:16:52.334 18:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.593 [2024-07-15 18:31:38.065533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.593 [2024-07-15 18:31:38.065587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.593 [2024-07-15 18:31:38.065606] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dcb500 00:16:52.593 [2024-07-15 18:31:38.065615] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.593 [2024-07-15 18:31:38.065984] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.593 [2024-07-15 18:31:38.066002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.593 [2024-07-15 18:31:38.066065] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:52.593 [2024-07-15 18:31:38.066084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.593 [2024-07-15 18:31:38.066182] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c1a1f0 00:16:52.593 [2024-07-15 18:31:38.066191] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:52.593 [2024-07-15 18:31:38.066369] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cab510 00:16:52.593 [2024-07-15 18:31:38.066500] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c1a1f0 00:16:52.593 [2024-07-15 18:31:38.066509] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c1a1f0 00:16:52.593 [2024-07-15 18:31:38.066609] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.593 pt3 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.593 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.852 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.852 "name": "raid_bdev1", 00:16:52.852 "uuid": "f748f8e8-bbba-4b81-ae31-f06828a75319", 00:16:52.852 "strip_size_kb": 0, 00:16:52.852 "state": "online", 00:16:52.852 "raid_level": "raid1", 00:16:52.852 "superblock": true, 00:16:52.852 "num_base_bdevs": 3, 00:16:52.852 "num_base_bdevs_discovered": 2, 00:16:52.852 "num_base_bdevs_operational": 2, 00:16:52.852 "base_bdevs_list": [ 00:16:52.852 { 00:16:52.852 "name": null, 00:16:52.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.852 "is_configured": false, 00:16:52.852 "data_offset": 2048, 00:16:52.852 "data_size": 63488 00:16:52.852 }, 00:16:52.852 { 00:16:52.852 "name": "pt2", 00:16:52.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.852 "is_configured": true, 00:16:52.852 "data_offset": 2048, 00:16:52.852 "data_size": 63488 00:16:52.852 }, 00:16:52.852 { 00:16:52.852 "name": "pt3", 00:16:52.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.853 "is_configured": true, 00:16:52.853 
"data_offset": 2048, 00:16:52.853 "data_size": 63488 00:16:52.853 } 00:16:52.853 ] 00:16:52.853 }' 00:16:52.853 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.853 18:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.421 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:53.421 18:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:53.680 18:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:53.939 18:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:53.939 18:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:53.939 [2024-07-15 18:31:39.469596] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' f748f8e8-bbba-4b81-ae31-f06828a75319 '!=' f748f8e8-bbba-4b81-ae31-f06828a75319 ']' 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2824988 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2824988 ']' 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2824988 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2824988 00:16:54.197 
18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2824988' 00:16:54.197 killing process with pid 2824988 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2824988 00:16:54.197 [2024-07-15 18:31:39.541581] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.197 [2024-07-15 18:31:39.541641] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.197 [2024-07-15 18:31:39.541695] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.197 [2024-07-15 18:31:39.541705] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c1a1f0 name raid_bdev1, state offline 00:16:54.197 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2824988 00:16:54.197 [2024-07-15 18:31:39.568443] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.457 18:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:54.457 00:16:54.457 real 0m22.943s 00:16:54.457 user 0m42.998s 00:16:54.457 sys 0m3.187s 00:16:54.457 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.457 18:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 ************************************ 00:16:54.457 END TEST raid_superblock_test 00:16:54.457 ************************************ 00:16:54.457 18:31:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:54.457 18:31:39 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:16:54.457 18:31:39 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:54.457 18:31:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.457 18:31:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 ************************************ 00:16:54.457 START TEST raid_read_error_test 00:16:54.457 ************************************ 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Zt4Q7ffpF1 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2828928 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2828928 /var/tmp/spdk-raid.sock 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2828928 ']' 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.457 18:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 [2024-07-15 18:31:39.883630] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:16:54.457 [2024-07-15 18:31:39.883691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828928 ] 00:16:54.457 [2024-07-15 18:31:39.984649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.716 [2024-07-15 18:31:40.085700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.716 [2024-07-15 18:31:40.149388] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.716 [2024-07-15 18:31:40.149435] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.651 18:31:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.651 18:31:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:55.651 18:31:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:55.651 18:31:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:55.651 BaseBdev1_malloc 00:16:55.651 18:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:55.910 true 00:16:55.910 18:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:56.169 [2024-07-15 18:31:41.609026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:56.169 [2024-07-15 18:31:41.609071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.170 [2024-07-15 18:31:41.609088] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaead20 00:16:56.170 [2024-07-15 18:31:41.609098] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.170 [2024-07-15 18:31:41.610873] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.170 [2024-07-15 18:31:41.610903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:56.170 BaseBdev1 00:16:56.170 18:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:56.170 18:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:56.427 BaseBdev2_malloc 00:16:56.427 18:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:56.686 true 00:16:56.686 18:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:56.945 [2024-07-15 18:31:42.391551] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:56.945 [2024-07-15 18:31:42.391593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.945 [2024-07-15 18:31:42.391608] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaefd50 00:16:56.945 [2024-07-15 18:31:42.391618] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.945 [2024-07-15 18:31:42.393123] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.945 [2024-07-15 18:31:42.393149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:56.945 BaseBdev2 00:16:56.945 18:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:56.945 18:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:57.203 BaseBdev3_malloc 00:16:57.203 18:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:57.462 true 00:16:57.462 18:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:57.720 [2024-07-15 18:31:43.178097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:57.720 [2024-07-15 18:31:43.178139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.720 [2024-07-15 18:31:43.178156] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaeeef0 00:16:57.720 [2024-07-15 18:31:43.178171] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.720 [2024-07-15 18:31:43.179804] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.720 [2024-07-15 18:31:43.179832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:57.720 BaseBdev3 00:16:57.720 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:57.981 [2024-07-15 18:31:43.438816] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.981 [2024-07-15 18:31:43.440094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.981 [2024-07-15 18:31:43.440162] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.981 [2024-07-15 18:31:43.440372] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xaf2a00 00:16:57.981 [2024-07-15 18:31:43.440382] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.981 [2024-07-15 18:31:43.440573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x946750 00:16:57.981 [2024-07-15 18:31:43.440737] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaf2a00 00:16:57.981 [2024-07-15 18:31:43.440746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xaf2a00 00:16:57.981 [2024-07-15 18:31:43.440851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.981 18:31:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.981 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.239 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.239 "name": "raid_bdev1", 00:16:58.239 "uuid": "33f4964f-a032-4fdf-8421-8f3553227fbf", 00:16:58.239 "strip_size_kb": 0, 00:16:58.239 "state": "online", 00:16:58.239 "raid_level": "raid1", 00:16:58.239 "superblock": true, 00:16:58.239 "num_base_bdevs": 3, 00:16:58.239 "num_base_bdevs_discovered": 3, 00:16:58.239 "num_base_bdevs_operational": 3, 00:16:58.239 "base_bdevs_list": [ 00:16:58.239 { 00:16:58.239 "name": "BaseBdev1", 00:16:58.239 "uuid": "d3cd1c41-6429-506d-a9de-62cc0e205e41", 00:16:58.239 "is_configured": true, 00:16:58.239 "data_offset": 2048, 00:16:58.239 "data_size": 63488 00:16:58.239 }, 00:16:58.239 { 00:16:58.239 "name": "BaseBdev2", 00:16:58.239 "uuid": "2f43a429-7363-52f8-9956-e4e3c1243edb", 00:16:58.239 
"is_configured": true, 00:16:58.239 "data_offset": 2048, 00:16:58.239 "data_size": 63488 00:16:58.239 }, 00:16:58.239 { 00:16:58.239 "name": "BaseBdev3", 00:16:58.239 "uuid": "4ed674ea-cbb2-5023-a3f5-f2dd03c1516b", 00:16:58.239 "is_configured": true, 00:16:58.239 "data_offset": 2048, 00:16:58.239 "data_size": 63488 00:16:58.239 } 00:16:58.239 ] 00:16:58.239 }' 00:16:58.239 18:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.239 18:31:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.805 18:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:58.805 18:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:59.063 [2024-07-15 18:31:44.461853] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaf8930 00:16:59.997 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.255 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.513 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.513 "name": "raid_bdev1", 00:17:00.513 "uuid": "33f4964f-a032-4fdf-8421-8f3553227fbf", 00:17:00.513 "strip_size_kb": 0, 00:17:00.513 "state": "online", 00:17:00.513 "raid_level": "raid1", 00:17:00.513 "superblock": true, 00:17:00.513 "num_base_bdevs": 3, 00:17:00.513 "num_base_bdevs_discovered": 3, 00:17:00.513 "num_base_bdevs_operational": 3, 00:17:00.513 "base_bdevs_list": [ 00:17:00.513 { 00:17:00.513 "name": "BaseBdev1", 00:17:00.513 "uuid": "d3cd1c41-6429-506d-a9de-62cc0e205e41", 00:17:00.513 "is_configured": true, 00:17:00.513 "data_offset": 2048, 00:17:00.513 "data_size": 63488 00:17:00.513 }, 00:17:00.513 { 00:17:00.513 "name": "BaseBdev2", 00:17:00.513 "uuid": "2f43a429-7363-52f8-9956-e4e3c1243edb", 00:17:00.513 "is_configured": true, 00:17:00.513 "data_offset": 2048, 00:17:00.513 
"data_size": 63488 00:17:00.513 }, 00:17:00.513 { 00:17:00.513 "name": "BaseBdev3", 00:17:00.513 "uuid": "4ed674ea-cbb2-5023-a3f5-f2dd03c1516b", 00:17:00.513 "is_configured": true, 00:17:00.513 "data_offset": 2048, 00:17:00.513 "data_size": 63488 00:17:00.513 } 00:17:00.513 ] 00:17:00.513 }' 00:17:00.513 18:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.513 18:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.078 18:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:01.335 [2024-07-15 18:31:46.840344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.335 [2024-07-15 18:31:46.840386] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.335 [2024-07-15 18:31:46.843833] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.335 [2024-07-15 18:31:46.843868] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.335 [2024-07-15 18:31:46.843975] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.335 [2024-07-15 18:31:46.843986] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaf2a00 name raid_bdev1, state offline 00:17:01.335 0 00:17:01.335 18:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2828928 00:17:01.335 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2828928 ']' 00:17:01.335 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2828928 00:17:01.335 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:01.335 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:17:01.335 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2828928 00:17:01.592 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:01.592 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:01.592 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2828928' 00:17:01.592 killing process with pid 2828928 00:17:01.592 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2828928 00:17:01.592 [2024-07-15 18:31:46.919621] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.592 18:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2828928 00:17:01.592 [2024-07-15 18:31:46.940000] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.592 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Zt4Q7ffpF1 00:17:01.592 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:01.592 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:01.849 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:01.849 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:01.849 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:01.849 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:01.849 18:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:01.849 00:17:01.849 real 0m7.339s 00:17:01.849 user 0m12.039s 00:17:01.849 sys 0m1.001s 00:17:01.849 18:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.849 
18:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.849 ************************************ 00:17:01.849 END TEST raid_read_error_test 00:17:01.849 ************************************ 00:17:01.849 18:31:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:01.850 18:31:47 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:17:01.850 18:31:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:01.850 18:31:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.850 18:31:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.850 ************************************ 00:17:01.850 START TEST raid_write_error_test 00:17:01.850 ************************************ 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:01.850 
18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.v6bWRAPpNT 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2830243 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2830243 /var/tmp/spdk-raid.sock 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2830243 ']' 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:01.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.850 18:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.850 [2024-07-15 18:31:47.264415] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:17:01.850 [2024-07-15 18:31:47.264476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830243 ] 00:17:01.850 [2024-07-15 18:31:47.362930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.107 [2024-07-15 18:31:47.458089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.107 [2024-07-15 18:31:47.517035] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.107 [2024-07-15 18:31:47.517068] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.671 18:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.671 18:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:02.671 18:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.671 18:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:02.928 BaseBdev1_malloc 00:17:02.928 18:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:03.186 true 00:17:03.186 18:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:03.444 [2024-07-15 18:31:48.899500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:03.444 [2024-07-15 18:31:48.899539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:03.444 [2024-07-15 18:31:48.899556] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1466d20 00:17:03.444 [2024-07-15 18:31:48.899566] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.444 [2024-07-15 18:31:48.901343] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.444 [2024-07-15 18:31:48.901372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.444 BaseBdev1 00:17:03.444 18:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:03.444 18:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:03.703 BaseBdev2_malloc 00:17:03.703 18:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:03.961 true 00:17:03.961 18:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:04.220 [2024-07-15 18:31:49.658096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:04.220 [2024-07-15 18:31:49.658139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.220 [2024-07-15 18:31:49.658156] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x146bd50 00:17:04.220 [2024-07-15 18:31:49.658165] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.220 [2024-07-15 18:31:49.659783] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.220 [2024-07-15 18:31:49.659810] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:04.220 BaseBdev2 00:17:04.220 18:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:04.220 18:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:04.478 BaseBdev3_malloc 00:17:04.478 18:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:04.737 true 00:17:04.737 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:04.995 [2024-07-15 18:31:50.420678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:04.995 [2024-07-15 18:31:50.420721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.995 [2024-07-15 18:31:50.420738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x146aef0 00:17:04.995 [2024-07-15 18:31:50.420747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.995 [2024-07-15 18:31:50.422374] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.995 [2024-07-15 18:31:50.422402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:04.995 BaseBdev3 00:17:04.995 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:05.254 [2024-07-15 18:31:50.673377] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.254 [2024-07-15 18:31:50.674742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.254 [2024-07-15 18:31:50.674811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.254 [2024-07-15 18:31:50.675031] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x146ea00 00:17:05.254 [2024-07-15 18:31:50.675041] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:05.254 [2024-07-15 18:31:50.675242] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12c2750 00:17:05.254 [2024-07-15 18:31:50.675408] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x146ea00 00:17:05.254 [2024-07-15 18:31:50.675417] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x146ea00 00:17:05.254 [2024-07-15 18:31:50.675526] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.254 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.554 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.554 "name": "raid_bdev1", 00:17:05.554 "uuid": "79aad7e2-9939-43c0-a7c0-6acdae246d2c", 00:17:05.554 "strip_size_kb": 0, 00:17:05.554 "state": "online", 00:17:05.554 "raid_level": "raid1", 00:17:05.554 "superblock": true, 00:17:05.554 "num_base_bdevs": 3, 00:17:05.554 "num_base_bdevs_discovered": 3, 00:17:05.554 "num_base_bdevs_operational": 3, 00:17:05.554 "base_bdevs_list": [ 00:17:05.554 { 00:17:05.554 "name": "BaseBdev1", 00:17:05.554 "uuid": "b0e8ee15-96ed-502f-978b-0958391070a2", 00:17:05.554 "is_configured": true, 00:17:05.555 "data_offset": 2048, 00:17:05.555 "data_size": 63488 00:17:05.555 }, 00:17:05.555 { 00:17:05.555 "name": "BaseBdev2", 00:17:05.555 "uuid": "9a3660c8-d1e7-5c62-a0fb-540eb6491d6f", 00:17:05.555 "is_configured": true, 00:17:05.555 "data_offset": 2048, 00:17:05.555 "data_size": 63488 00:17:05.555 }, 00:17:05.555 { 00:17:05.555 "name": "BaseBdev3", 00:17:05.555 "uuid": "e4235cbd-cbdc-5f93-bc43-66c35f776e43", 00:17:05.555 "is_configured": true, 00:17:05.555 "data_offset": 2048, 00:17:05.555 "data_size": 63488 00:17:05.555 } 00:17:05.555 ] 00:17:05.555 }' 00:17:05.555 18:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.555 18:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.121 18:31:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:17:06.121 18:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:06.121 [2024-07-15 18:31:51.656396] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1474930 00:17:07.160 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:07.419 [2024-07-15 18:31:52.780315] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:07.419 [2024-07-15 18:31:52.780373] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.419 [2024-07-15 18:31:52.780568] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1474930 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.419 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.676 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.676 "name": "raid_bdev1", 00:17:07.676 "uuid": "79aad7e2-9939-43c0-a7c0-6acdae246d2c", 00:17:07.676 "strip_size_kb": 0, 00:17:07.676 "state": "online", 00:17:07.676 "raid_level": "raid1", 00:17:07.676 "superblock": true, 00:17:07.676 "num_base_bdevs": 3, 00:17:07.676 "num_base_bdevs_discovered": 2, 00:17:07.676 "num_base_bdevs_operational": 2, 00:17:07.676 "base_bdevs_list": [ 00:17:07.676 { 00:17:07.676 "name": null, 00:17:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.676 "is_configured": false, 00:17:07.676 "data_offset": 2048, 00:17:07.676 "data_size": 63488 00:17:07.676 }, 00:17:07.676 { 00:17:07.676 "name": "BaseBdev2", 00:17:07.676 "uuid": "9a3660c8-d1e7-5c62-a0fb-540eb6491d6f", 00:17:07.676 "is_configured": true, 00:17:07.676 "data_offset": 2048, 00:17:07.676 "data_size": 63488 00:17:07.676 }, 00:17:07.676 { 00:17:07.676 "name": "BaseBdev3", 00:17:07.676 "uuid": "e4235cbd-cbdc-5f93-bc43-66c35f776e43", 00:17:07.676 "is_configured": true, 00:17:07.676 "data_offset": 2048, 
00:17:07.676 "data_size": 63488 00:17:07.676 } 00:17:07.676 ] 00:17:07.676 }' 00:17:07.676 18:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.676 18:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.243 18:31:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:08.502 [2024-07-15 18:31:53.871776] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.502 [2024-07-15 18:31:53.871807] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.502 [2024-07-15 18:31:53.875205] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.502 [2024-07-15 18:31:53.875237] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.502 [2024-07-15 18:31:53.875313] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.502 [2024-07-15 18:31:53.875321] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x146ea00 name raid_bdev1, state offline 00:17:08.502 0 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2830243 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2830243 ']' 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2830243 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830243 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830243' 00:17:08.502 killing process with pid 2830243 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2830243 00:17:08.502 [2024-07-15 18:31:53.935355] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.502 18:31:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2830243 00:17:08.502 [2024-07-15 18:31:53.955338] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.v6bWRAPpNT 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:08.761 00:17:08.761 real 0m6.972s 00:17:08.761 user 0m11.386s 00:17:08.761 sys 0m0.937s 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.761 18:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.761 ************************************ 00:17:08.761 END TEST raid_write_error_test 
00:17:08.761 ************************************ 00:17:08.761 18:31:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:08.761 18:31:54 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:17:08.761 18:31:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:08.761 18:31:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:08.761 18:31:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:08.761 18:31:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.761 18:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.761 ************************************ 00:17:08.761 START TEST raid_state_function_test 00:17:08.761 ************************************ 00:17:08.761 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:17:08.761 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:08.761 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:08.761 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:08.761 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # 
'[' false = true ']' 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2831455 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2831455' 00:17:08.762 Process raid pid: 2831455 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2831455 /var/tmp/spdk-raid.sock 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2831455 ']' 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.762 18:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.762 [2024-07-15 18:31:54.273969] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:17:08.762 [2024-07-15 18:31:54.274030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.020 [2024-07-15 18:31:54.372689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.020 [2024-07-15 18:31:54.468219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.020 [2024-07-15 18:31:54.532309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.020 [2024-07-15 18:31:54.532342] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:09.956 [2024-07-15 18:31:55.468025] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.956 [2024-07-15 18:31:55.468069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.956 [2024-07-15 18:31:55.468078] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.956 [2024-07-15 18:31:55.468087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.956 [2024-07-15 18:31:55.468094] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.956 [2024-07-15 18:31:55.468102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:17:09.956 [2024-07-15 18:31:55.468108] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:09.956 [2024-07-15 18:31:55.468116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.956 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.214 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.214 "name": "Existed_Raid", 00:17:10.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.214 "strip_size_kb": 64, 
00:17:10.214 "state": "configuring", 00:17:10.214 "raid_level": "raid0", 00:17:10.214 "superblock": false, 00:17:10.214 "num_base_bdevs": 4, 00:17:10.214 "num_base_bdevs_discovered": 0, 00:17:10.214 "num_base_bdevs_operational": 4, 00:17:10.214 "base_bdevs_list": [ 00:17:10.214 { 00:17:10.214 "name": "BaseBdev1", 00:17:10.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.214 "is_configured": false, 00:17:10.214 "data_offset": 0, 00:17:10.214 "data_size": 0 00:17:10.214 }, 00:17:10.214 { 00:17:10.214 "name": "BaseBdev2", 00:17:10.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.214 "is_configured": false, 00:17:10.214 "data_offset": 0, 00:17:10.214 "data_size": 0 00:17:10.214 }, 00:17:10.214 { 00:17:10.214 "name": "BaseBdev3", 00:17:10.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.214 "is_configured": false, 00:17:10.214 "data_offset": 0, 00:17:10.214 "data_size": 0 00:17:10.214 }, 00:17:10.214 { 00:17:10.214 "name": "BaseBdev4", 00:17:10.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.214 "is_configured": false, 00:17:10.214 "data_offset": 0, 00:17:10.214 "data_size": 0 00:17:10.214 } 00:17:10.214 ] 00:17:10.214 }' 00:17:10.214 18:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.214 18:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.780 18:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:11.038 [2024-07-15 18:31:56.554792] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.039 [2024-07-15 18:31:56.554821] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2072bc0 name Existed_Raid, state configuring 00:17:11.039 18:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:11.297 [2024-07-15 18:31:56.811496] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.297 [2024-07-15 18:31:56.811522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.297 [2024-07-15 18:31:56.811529] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.297 [2024-07-15 18:31:56.811538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.297 [2024-07-15 18:31:56.811545] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:11.297 [2024-07-15 18:31:56.811553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:11.297 [2024-07-15 18:31:56.811559] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:11.297 [2024-07-15 18:31:56.811567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:11.297 18:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.555 [2024-07-15 18:31:57.077622] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.555 BaseBdev1 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.555 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.813 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:12.072 [ 00:17:12.072 { 00:17:12.072 "name": "BaseBdev1", 00:17:12.072 "aliases": [ 00:17:12.072 "92627401-2fbf-4004-a254-2281b9e19ecc" 00:17:12.072 ], 00:17:12.072 "product_name": "Malloc disk", 00:17:12.072 "block_size": 512, 00:17:12.072 "num_blocks": 65536, 00:17:12.072 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:12.072 "assigned_rate_limits": { 00:17:12.072 "rw_ios_per_sec": 0, 00:17:12.072 "rw_mbytes_per_sec": 0, 00:17:12.072 "r_mbytes_per_sec": 0, 00:17:12.072 "w_mbytes_per_sec": 0 00:17:12.072 }, 00:17:12.072 "claimed": true, 00:17:12.072 "claim_type": "exclusive_write", 00:17:12.072 "zoned": false, 00:17:12.072 "supported_io_types": { 00:17:12.072 "read": true, 00:17:12.072 "write": true, 00:17:12.072 "unmap": true, 00:17:12.072 "flush": true, 00:17:12.072 "reset": true, 00:17:12.072 "nvme_admin": false, 00:17:12.072 "nvme_io": false, 00:17:12.072 "nvme_io_md": false, 00:17:12.072 "write_zeroes": true, 00:17:12.072 "zcopy": true, 00:17:12.072 "get_zone_info": false, 00:17:12.072 "zone_management": false, 00:17:12.072 "zone_append": false, 00:17:12.072 "compare": false, 00:17:12.072 "compare_and_write": false, 00:17:12.072 "abort": true, 00:17:12.072 "seek_hole": false, 00:17:12.072 "seek_data": false, 00:17:12.072 "copy": true, 00:17:12.072 "nvme_iov_md": false 
00:17:12.072 }, 00:17:12.072 "memory_domains": [ 00:17:12.072 { 00:17:12.072 "dma_device_id": "system", 00:17:12.072 "dma_device_type": 1 00:17:12.072 }, 00:17:12.072 { 00:17:12.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.072 "dma_device_type": 2 00:17:12.072 } 00:17:12.072 ], 00:17:12.072 "driver_specific": {} 00:17:12.072 } 00:17:12.072 ] 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.072 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.331 18:31:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.331 "name": "Existed_Raid", 00:17:12.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.331 "strip_size_kb": 64, 00:17:12.331 "state": "configuring", 00:17:12.331 "raid_level": "raid0", 00:17:12.331 "superblock": false, 00:17:12.331 "num_base_bdevs": 4, 00:17:12.331 "num_base_bdevs_discovered": 1, 00:17:12.331 "num_base_bdevs_operational": 4, 00:17:12.331 "base_bdevs_list": [ 00:17:12.331 { 00:17:12.331 "name": "BaseBdev1", 00:17:12.331 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:12.331 "is_configured": true, 00:17:12.331 "data_offset": 0, 00:17:12.331 "data_size": 65536 00:17:12.331 }, 00:17:12.331 { 00:17:12.331 "name": "BaseBdev2", 00:17:12.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.331 "is_configured": false, 00:17:12.331 "data_offset": 0, 00:17:12.331 "data_size": 0 00:17:12.331 }, 00:17:12.331 { 00:17:12.331 "name": "BaseBdev3", 00:17:12.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.331 "is_configured": false, 00:17:12.331 "data_offset": 0, 00:17:12.331 "data_size": 0 00:17:12.331 }, 00:17:12.331 { 00:17:12.331 "name": "BaseBdev4", 00:17:12.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.331 "is_configured": false, 00:17:12.331 "data_offset": 0, 00:17:12.331 "data_size": 0 00:17:12.331 } 00:17:12.331 ] 00:17:12.331 }' 00:17:12.331 18:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.331 18:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.268 18:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.268 [2024-07-15 18:31:58.766242] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.268 [2024-07-15 18:31:58.766280] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2072430 name Existed_Raid, state configuring 00:17:13.268 18:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:13.527 [2024-07-15 18:31:59.022969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.527 [2024-07-15 18:31:59.024474] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.527 [2024-07-15 18:31:59.024505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.527 [2024-07-15 18:31:59.024513] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.527 [2024-07-15 18:31:59.024521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.527 [2024-07-15 18:31:59.024528] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:13.527 [2024-07-15 18:31:59.024536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:13.527 
18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.527 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.786 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.786 "name": "Existed_Raid", 00:17:13.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.786 "strip_size_kb": 64, 00:17:13.786 "state": "configuring", 00:17:13.786 "raid_level": "raid0", 00:17:13.786 "superblock": false, 00:17:13.786 "num_base_bdevs": 4, 00:17:13.786 "num_base_bdevs_discovered": 1, 00:17:13.786 "num_base_bdevs_operational": 4, 00:17:13.786 "base_bdevs_list": [ 00:17:13.786 { 00:17:13.786 "name": "BaseBdev1", 00:17:13.786 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:13.786 "is_configured": true, 00:17:13.786 "data_offset": 0, 00:17:13.786 "data_size": 65536 00:17:13.786 }, 00:17:13.786 { 00:17:13.786 "name": "BaseBdev2", 00:17:13.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.786 "is_configured": false, 00:17:13.786 "data_offset": 0, 00:17:13.786 "data_size": 0 00:17:13.786 }, 00:17:13.786 { 00:17:13.786 "name": "BaseBdev3", 00:17:13.786 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.786 "is_configured": false, 00:17:13.786 "data_offset": 0, 00:17:13.786 "data_size": 0 00:17:13.786 }, 00:17:13.786 { 00:17:13.786 "name": "BaseBdev4", 00:17:13.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.786 "is_configured": false, 00:17:13.786 "data_offset": 0, 00:17:13.786 "data_size": 0 00:17:13.786 } 00:17:13.786 ] 00:17:13.786 }' 00:17:13.786 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.786 18:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.720 18:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:14.720 [2024-07-15 18:32:00.217391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.720 BaseBdev2 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:14.720 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.977 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.246 [ 00:17:15.246 { 00:17:15.246 "name": "BaseBdev2", 00:17:15.246 "aliases": [ 00:17:15.246 "8c226511-bd1f-4643-84cd-a4ac573f38c0" 00:17:15.246 ], 00:17:15.246 "product_name": "Malloc disk", 00:17:15.246 "block_size": 512, 00:17:15.246 "num_blocks": 65536, 00:17:15.246 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:15.246 "assigned_rate_limits": { 00:17:15.246 "rw_ios_per_sec": 0, 00:17:15.246 "rw_mbytes_per_sec": 0, 00:17:15.246 "r_mbytes_per_sec": 0, 00:17:15.246 "w_mbytes_per_sec": 0 00:17:15.246 }, 00:17:15.246 "claimed": true, 00:17:15.246 "claim_type": "exclusive_write", 00:17:15.246 "zoned": false, 00:17:15.246 "supported_io_types": { 00:17:15.246 "read": true, 00:17:15.246 "write": true, 00:17:15.246 "unmap": true, 00:17:15.246 "flush": true, 00:17:15.246 "reset": true, 00:17:15.246 "nvme_admin": false, 00:17:15.246 "nvme_io": false, 00:17:15.246 "nvme_io_md": false, 00:17:15.246 "write_zeroes": true, 00:17:15.246 "zcopy": true, 00:17:15.246 "get_zone_info": false, 00:17:15.246 "zone_management": false, 00:17:15.246 "zone_append": false, 00:17:15.246 "compare": false, 00:17:15.246 "compare_and_write": false, 00:17:15.246 "abort": true, 00:17:15.246 "seek_hole": false, 00:17:15.246 "seek_data": false, 00:17:15.246 "copy": true, 00:17:15.246 "nvme_iov_md": false 00:17:15.246 }, 00:17:15.246 "memory_domains": [ 00:17:15.246 { 00:17:15.246 "dma_device_id": "system", 00:17:15.246 "dma_device_type": 1 00:17:15.246 }, 00:17:15.246 { 00:17:15.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.246 "dma_device_type": 2 00:17:15.246 } 00:17:15.246 ], 00:17:15.246 "driver_specific": {} 00:17:15.246 } 00:17:15.246 ] 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.246 18:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.504 18:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.504 "name": "Existed_Raid", 00:17:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.504 "strip_size_kb": 64, 00:17:15.504 "state": "configuring", 00:17:15.504 "raid_level": "raid0", 00:17:15.504 "superblock": false, 00:17:15.504 "num_base_bdevs": 4, 00:17:15.504 
"num_base_bdevs_discovered": 2, 00:17:15.504 "num_base_bdevs_operational": 4, 00:17:15.504 "base_bdevs_list": [ 00:17:15.504 { 00:17:15.504 "name": "BaseBdev1", 00:17:15.504 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:15.504 "is_configured": true, 00:17:15.504 "data_offset": 0, 00:17:15.504 "data_size": 65536 00:17:15.504 }, 00:17:15.504 { 00:17:15.504 "name": "BaseBdev2", 00:17:15.504 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:15.504 "is_configured": true, 00:17:15.504 "data_offset": 0, 00:17:15.504 "data_size": 65536 00:17:15.504 }, 00:17:15.504 { 00:17:15.504 "name": "BaseBdev3", 00:17:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.504 "is_configured": false, 00:17:15.504 "data_offset": 0, 00:17:15.504 "data_size": 0 00:17:15.504 }, 00:17:15.504 { 00:17:15.504 "name": "BaseBdev4", 00:17:15.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.504 "is_configured": false, 00:17:15.504 "data_offset": 0, 00:17:15.504 "data_size": 0 00:17:15.504 } 00:17:15.504 ] 00:17:15.504 }' 00:17:15.504 18:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.504 18:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:16.437 [2024-07-15 18:32:01.897101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:16.437 BaseBdev3 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.437 18:32:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.437 18:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.696 18:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:16.954 [ 00:17:16.954 { 00:17:16.954 "name": "BaseBdev3", 00:17:16.954 "aliases": [ 00:17:16.954 "5b830d4e-e6af-4239-9946-46b42e2cb57d" 00:17:16.954 ], 00:17:16.954 "product_name": "Malloc disk", 00:17:16.954 "block_size": 512, 00:17:16.954 "num_blocks": 65536, 00:17:16.954 "uuid": "5b830d4e-e6af-4239-9946-46b42e2cb57d", 00:17:16.954 "assigned_rate_limits": { 00:17:16.954 "rw_ios_per_sec": 0, 00:17:16.954 "rw_mbytes_per_sec": 0, 00:17:16.954 "r_mbytes_per_sec": 0, 00:17:16.954 "w_mbytes_per_sec": 0 00:17:16.954 }, 00:17:16.954 "claimed": true, 00:17:16.954 "claim_type": "exclusive_write", 00:17:16.954 "zoned": false, 00:17:16.954 "supported_io_types": { 00:17:16.954 "read": true, 00:17:16.954 "write": true, 00:17:16.954 "unmap": true, 00:17:16.954 "flush": true, 00:17:16.954 "reset": true, 00:17:16.954 "nvme_admin": false, 00:17:16.954 "nvme_io": false, 00:17:16.954 "nvme_io_md": false, 00:17:16.954 "write_zeroes": true, 00:17:16.954 "zcopy": true, 00:17:16.954 "get_zone_info": false, 00:17:16.954 "zone_management": false, 00:17:16.954 "zone_append": false, 00:17:16.954 "compare": false, 00:17:16.954 "compare_and_write": false, 00:17:16.954 "abort": true, 00:17:16.954 "seek_hole": false, 00:17:16.954 "seek_data": false, 00:17:16.954 "copy": 
true, 00:17:16.954 "nvme_iov_md": false 00:17:16.954 }, 00:17:16.954 "memory_domains": [ 00:17:16.954 { 00:17:16.954 "dma_device_id": "system", 00:17:16.954 "dma_device_type": 1 00:17:16.954 }, 00:17:16.954 { 00:17:16.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.954 "dma_device_type": 2 00:17:16.954 } 00:17:16.954 ], 00:17:16.954 "driver_specific": {} 00:17:16.954 } 00:17:16.954 ] 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:16.954 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.955 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.213 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.213 "name": "Existed_Raid", 00:17:17.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.213 "strip_size_kb": 64, 00:17:17.213 "state": "configuring", 00:17:17.213 "raid_level": "raid0", 00:17:17.213 "superblock": false, 00:17:17.213 "num_base_bdevs": 4, 00:17:17.213 "num_base_bdevs_discovered": 3, 00:17:17.213 "num_base_bdevs_operational": 4, 00:17:17.213 "base_bdevs_list": [ 00:17:17.213 { 00:17:17.213 "name": "BaseBdev1", 00:17:17.213 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 0, 00:17:17.213 "data_size": 65536 00:17:17.213 }, 00:17:17.213 { 00:17:17.213 "name": "BaseBdev2", 00:17:17.213 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 0, 00:17:17.213 "data_size": 65536 00:17:17.213 }, 00:17:17.213 { 00:17:17.213 "name": "BaseBdev3", 00:17:17.213 "uuid": "5b830d4e-e6af-4239-9946-46b42e2cb57d", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 0, 00:17:17.213 "data_size": 65536 00:17:17.213 }, 00:17:17.213 { 00:17:17.213 "name": "BaseBdev4", 00:17:17.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.213 "is_configured": false, 00:17:17.213 "data_offset": 0, 00:17:17.213 "data_size": 0 00:17:17.213 } 00:17:17.213 ] 00:17:17.213 }' 00:17:17.213 18:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.213 18:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.779 18:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:18.057 [2024-07-15 18:32:03.560935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:18.057 [2024-07-15 18:32:03.560984] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2073490 00:17:18.057 [2024-07-15 18:32:03.560992] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:18.057 [2024-07-15 18:32:03.561191] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x205f2d0 00:17:18.057 [2024-07-15 18:32:03.561321] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2073490 00:17:18.057 [2024-07-15 18:32:03.561329] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2073490 00:17:18.057 [2024-07-15 18:32:03.561493] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.057 BaseBdev4 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.057 18:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.314 18:32:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:18.573 [ 00:17:18.573 { 00:17:18.573 "name": "BaseBdev4", 00:17:18.573 "aliases": [ 00:17:18.573 "845d84f1-b04d-4ef3-9b42-11cef12224f2" 00:17:18.573 ], 00:17:18.573 "product_name": "Malloc disk", 00:17:18.573 "block_size": 512, 00:17:18.573 "num_blocks": 65536, 00:17:18.573 "uuid": "845d84f1-b04d-4ef3-9b42-11cef12224f2", 00:17:18.573 "assigned_rate_limits": { 00:17:18.573 "rw_ios_per_sec": 0, 00:17:18.573 "rw_mbytes_per_sec": 0, 00:17:18.573 "r_mbytes_per_sec": 0, 00:17:18.573 "w_mbytes_per_sec": 0 00:17:18.573 }, 00:17:18.573 "claimed": true, 00:17:18.573 "claim_type": "exclusive_write", 00:17:18.573 "zoned": false, 00:17:18.573 "supported_io_types": { 00:17:18.573 "read": true, 00:17:18.573 "write": true, 00:17:18.573 "unmap": true, 00:17:18.573 "flush": true, 00:17:18.573 "reset": true, 00:17:18.573 "nvme_admin": false, 00:17:18.573 "nvme_io": false, 00:17:18.573 "nvme_io_md": false, 00:17:18.573 "write_zeroes": true, 00:17:18.573 "zcopy": true, 00:17:18.573 "get_zone_info": false, 00:17:18.573 "zone_management": false, 00:17:18.573 "zone_append": false, 00:17:18.573 "compare": false, 00:17:18.573 "compare_and_write": false, 00:17:18.573 "abort": true, 00:17:18.573 "seek_hole": false, 00:17:18.573 "seek_data": false, 00:17:18.573 "copy": true, 00:17:18.573 "nvme_iov_md": false 00:17:18.573 }, 00:17:18.573 "memory_domains": [ 00:17:18.573 { 00:17:18.573 "dma_device_id": "system", 00:17:18.573 "dma_device_type": 1 00:17:18.573 }, 00:17:18.573 { 00:17:18.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.573 "dma_device_type": 2 00:17:18.573 } 00:17:18.573 ], 00:17:18.573 "driver_specific": {} 00:17:18.573 } 00:17:18.573 ] 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.573 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.832 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.832 "name": "Existed_Raid", 00:17:18.832 "uuid": "eadbcc8b-4802-40d0-aae3-e5713ada7a1e", 00:17:18.832 "strip_size_kb": 64, 00:17:18.832 "state": "online", 00:17:18.832 "raid_level": "raid0", 00:17:18.832 "superblock": false, 00:17:18.832 
"num_base_bdevs": 4, 00:17:18.832 "num_base_bdevs_discovered": 4, 00:17:18.832 "num_base_bdevs_operational": 4, 00:17:18.832 "base_bdevs_list": [ 00:17:18.832 { 00:17:18.832 "name": "BaseBdev1", 00:17:18.832 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:18.832 "is_configured": true, 00:17:18.832 "data_offset": 0, 00:17:18.832 "data_size": 65536 00:17:18.832 }, 00:17:18.832 { 00:17:18.832 "name": "BaseBdev2", 00:17:18.832 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:18.832 "is_configured": true, 00:17:18.832 "data_offset": 0, 00:17:18.832 "data_size": 65536 00:17:18.832 }, 00:17:18.832 { 00:17:18.832 "name": "BaseBdev3", 00:17:18.832 "uuid": "5b830d4e-e6af-4239-9946-46b42e2cb57d", 00:17:18.832 "is_configured": true, 00:17:18.832 "data_offset": 0, 00:17:18.832 "data_size": 65536 00:17:18.832 }, 00:17:18.832 { 00:17:18.832 "name": "BaseBdev4", 00:17:18.832 "uuid": "845d84f1-b04d-4ef3-9b42-11cef12224f2", 00:17:18.832 "is_configured": true, 00:17:18.832 "data_offset": 0, 00:17:18.832 "data_size": 65536 00:17:18.832 } 00:17:18.832 ] 00:17:18.832 }' 00:17:18.832 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.832 18:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:19.764 18:32:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:19.764 18:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:19.764 [2024-07-15 18:32:05.225978] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.764 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:19.764 "name": "Existed_Raid", 00:17:19.764 "aliases": [ 00:17:19.764 "eadbcc8b-4802-40d0-aae3-e5713ada7a1e" 00:17:19.764 ], 00:17:19.764 "product_name": "Raid Volume", 00:17:19.764 "block_size": 512, 00:17:19.764 "num_blocks": 262144, 00:17:19.764 "uuid": "eadbcc8b-4802-40d0-aae3-e5713ada7a1e", 00:17:19.764 "assigned_rate_limits": { 00:17:19.764 "rw_ios_per_sec": 0, 00:17:19.764 "rw_mbytes_per_sec": 0, 00:17:19.764 "r_mbytes_per_sec": 0, 00:17:19.764 "w_mbytes_per_sec": 0 00:17:19.764 }, 00:17:19.764 "claimed": false, 00:17:19.764 "zoned": false, 00:17:19.764 "supported_io_types": { 00:17:19.764 "read": true, 00:17:19.764 "write": true, 00:17:19.764 "unmap": true, 00:17:19.764 "flush": true, 00:17:19.764 "reset": true, 00:17:19.764 "nvme_admin": false, 00:17:19.764 "nvme_io": false, 00:17:19.764 "nvme_io_md": false, 00:17:19.764 "write_zeroes": true, 00:17:19.764 "zcopy": false, 00:17:19.764 "get_zone_info": false, 00:17:19.764 "zone_management": false, 00:17:19.764 "zone_append": false, 00:17:19.764 "compare": false, 00:17:19.764 "compare_and_write": false, 00:17:19.764 "abort": false, 00:17:19.764 "seek_hole": false, 00:17:19.764 "seek_data": false, 00:17:19.764 "copy": false, 00:17:19.764 "nvme_iov_md": false 00:17:19.764 }, 00:17:19.764 "memory_domains": [ 00:17:19.764 { 00:17:19.764 "dma_device_id": "system", 00:17:19.764 "dma_device_type": 1 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.764 
"dma_device_type": 2 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "system", 00:17:19.764 "dma_device_type": 1 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.764 "dma_device_type": 2 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "system", 00:17:19.764 "dma_device_type": 1 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.764 "dma_device_type": 2 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "system", 00:17:19.764 "dma_device_type": 1 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.764 "dma_device_type": 2 00:17:19.764 } 00:17:19.764 ], 00:17:19.764 "driver_specific": { 00:17:19.764 "raid": { 00:17:19.764 "uuid": "eadbcc8b-4802-40d0-aae3-e5713ada7a1e", 00:17:19.764 "strip_size_kb": 64, 00:17:19.764 "state": "online", 00:17:19.764 "raid_level": "raid0", 00:17:19.764 "superblock": false, 00:17:19.764 "num_base_bdevs": 4, 00:17:19.764 "num_base_bdevs_discovered": 4, 00:17:19.764 "num_base_bdevs_operational": 4, 00:17:19.764 "base_bdevs_list": [ 00:17:19.764 { 00:17:19.764 "name": "BaseBdev1", 00:17:19.764 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:19.764 "is_configured": true, 00:17:19.764 "data_offset": 0, 00:17:19.764 "data_size": 65536 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "name": "BaseBdev2", 00:17:19.764 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:19.764 "is_configured": true, 00:17:19.764 "data_offset": 0, 00:17:19.764 "data_size": 65536 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "name": "BaseBdev3", 00:17:19.764 "uuid": "5b830d4e-e6af-4239-9946-46b42e2cb57d", 00:17:19.764 "is_configured": true, 00:17:19.764 "data_offset": 0, 00:17:19.764 "data_size": 65536 00:17:19.764 }, 00:17:19.764 { 00:17:19.764 "name": "BaseBdev4", 00:17:19.764 "uuid": "845d84f1-b04d-4ef3-9b42-11cef12224f2", 00:17:19.764 "is_configured": true, 00:17:19.764 "data_offset": 0, 
00:17:19.764 "data_size": 65536 00:17:19.764 } 00:17:19.765 ] 00:17:19.765 } 00:17:19.765 } 00:17:19.765 }' 00:17:19.765 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.765 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:19.765 BaseBdev2 00:17:19.765 BaseBdev3 00:17:19.765 BaseBdev4' 00:17:19.765 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:19.765 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:19.765 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.022 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.022 "name": "BaseBdev1", 00:17:20.022 "aliases": [ 00:17:20.022 "92627401-2fbf-4004-a254-2281b9e19ecc" 00:17:20.022 ], 00:17:20.022 "product_name": "Malloc disk", 00:17:20.022 "block_size": 512, 00:17:20.022 "num_blocks": 65536, 00:17:20.022 "uuid": "92627401-2fbf-4004-a254-2281b9e19ecc", 00:17:20.022 "assigned_rate_limits": { 00:17:20.022 "rw_ios_per_sec": 0, 00:17:20.022 "rw_mbytes_per_sec": 0, 00:17:20.022 "r_mbytes_per_sec": 0, 00:17:20.022 "w_mbytes_per_sec": 0 00:17:20.022 }, 00:17:20.022 "claimed": true, 00:17:20.022 "claim_type": "exclusive_write", 00:17:20.022 "zoned": false, 00:17:20.022 "supported_io_types": { 00:17:20.022 "read": true, 00:17:20.022 "write": true, 00:17:20.022 "unmap": true, 00:17:20.022 "flush": true, 00:17:20.022 "reset": true, 00:17:20.022 "nvme_admin": false, 00:17:20.022 "nvme_io": false, 00:17:20.022 "nvme_io_md": false, 00:17:20.022 "write_zeroes": true, 00:17:20.022 "zcopy": true, 00:17:20.022 "get_zone_info": false, 00:17:20.022 "zone_management": 
false, 00:17:20.022 "zone_append": false, 00:17:20.022 "compare": false, 00:17:20.022 "compare_and_write": false, 00:17:20.022 "abort": true, 00:17:20.022 "seek_hole": false, 00:17:20.022 "seek_data": false, 00:17:20.022 "copy": true, 00:17:20.022 "nvme_iov_md": false 00:17:20.022 }, 00:17:20.022 "memory_domains": [ 00:17:20.022 { 00:17:20.022 "dma_device_id": "system", 00:17:20.022 "dma_device_type": 1 00:17:20.022 }, 00:17:20.022 { 00:17:20.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.022 "dma_device_type": 2 00:17:20.022 } 00:17:20.022 ], 00:17:20.022 "driver_specific": {} 00:17:20.022 }' 00:17:20.022 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.280 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.280 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.280 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.280 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.538 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.538 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.538 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.538 18:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:20.538 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.538 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.795 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.795 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.795 18:32:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:20.795 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.054 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.054 "name": "BaseBdev2", 00:17:21.054 "aliases": [ 00:17:21.054 "8c226511-bd1f-4643-84cd-a4ac573f38c0" 00:17:21.054 ], 00:17:21.054 "product_name": "Malloc disk", 00:17:21.054 "block_size": 512, 00:17:21.054 "num_blocks": 65536, 00:17:21.054 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:21.054 "assigned_rate_limits": { 00:17:21.054 "rw_ios_per_sec": 0, 00:17:21.054 "rw_mbytes_per_sec": 0, 00:17:21.054 "r_mbytes_per_sec": 0, 00:17:21.054 "w_mbytes_per_sec": 0 00:17:21.054 }, 00:17:21.054 "claimed": true, 00:17:21.054 "claim_type": "exclusive_write", 00:17:21.054 "zoned": false, 00:17:21.054 "supported_io_types": { 00:17:21.054 "read": true, 00:17:21.054 "write": true, 00:17:21.054 "unmap": true, 00:17:21.054 "flush": true, 00:17:21.054 "reset": true, 00:17:21.054 "nvme_admin": false, 00:17:21.054 "nvme_io": false, 00:17:21.054 "nvme_io_md": false, 00:17:21.054 "write_zeroes": true, 00:17:21.054 "zcopy": true, 00:17:21.054 "get_zone_info": false, 00:17:21.054 "zone_management": false, 00:17:21.054 "zone_append": false, 00:17:21.054 "compare": false, 00:17:21.054 "compare_and_write": false, 00:17:21.054 "abort": true, 00:17:21.054 "seek_hole": false, 00:17:21.054 "seek_data": false, 00:17:21.054 "copy": true, 00:17:21.054 "nvme_iov_md": false 00:17:21.054 }, 00:17:21.054 "memory_domains": [ 00:17:21.054 { 00:17:21.054 "dma_device_id": "system", 00:17:21.054 "dma_device_type": 1 00:17:21.054 }, 00:17:21.054 { 00:17:21.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.054 "dma_device_type": 2 00:17:21.054 } 00:17:21.054 ], 00:17:21.054 "driver_specific": {} 00:17:21.054 
}' 00:17:21.054 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.054 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.054 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.054 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.311 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.311 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.311 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.311 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.311 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.311 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.569 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.569 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.569 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.569 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:21.569 18:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.828 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.828 "name": "BaseBdev3", 00:17:21.828 "aliases": [ 00:17:21.828 "5b830d4e-e6af-4239-9946-46b42e2cb57d" 00:17:21.828 ], 00:17:21.828 "product_name": "Malloc disk", 00:17:21.828 "block_size": 512, 00:17:21.828 "num_blocks": 65536, 
00:17:21.828 "uuid": "5b830d4e-e6af-4239-9946-46b42e2cb57d", 00:17:21.828 "assigned_rate_limits": { 00:17:21.828 "rw_ios_per_sec": 0, 00:17:21.828 "rw_mbytes_per_sec": 0, 00:17:21.828 "r_mbytes_per_sec": 0, 00:17:21.828 "w_mbytes_per_sec": 0 00:17:21.828 }, 00:17:21.828 "claimed": true, 00:17:21.828 "claim_type": "exclusive_write", 00:17:21.828 "zoned": false, 00:17:21.828 "supported_io_types": { 00:17:21.828 "read": true, 00:17:21.828 "write": true, 00:17:21.828 "unmap": true, 00:17:21.828 "flush": true, 00:17:21.828 "reset": true, 00:17:21.828 "nvme_admin": false, 00:17:21.828 "nvme_io": false, 00:17:21.828 "nvme_io_md": false, 00:17:21.828 "write_zeroes": true, 00:17:21.828 "zcopy": true, 00:17:21.828 "get_zone_info": false, 00:17:21.828 "zone_management": false, 00:17:21.828 "zone_append": false, 00:17:21.828 "compare": false, 00:17:21.828 "compare_and_write": false, 00:17:21.828 "abort": true, 00:17:21.828 "seek_hole": false, 00:17:21.828 "seek_data": false, 00:17:21.828 "copy": true, 00:17:21.828 "nvme_iov_md": false 00:17:21.828 }, 00:17:21.828 "memory_domains": [ 00:17:21.828 { 00:17:21.828 "dma_device_id": "system", 00:17:21.828 "dma_device_type": 1 00:17:21.828 }, 00:17:21.828 { 00:17:21.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.828 "dma_device_type": 2 00:17:21.828 } 00:17:21.828 ], 00:17:21.828 "driver_specific": {} 00:17:21.828 }' 00:17:21.828 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.828 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.828 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.828 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.087 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.087 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:17:22.087 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.087 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.087 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.087 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.345 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.345 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.345 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.345 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:22.345 18:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.910 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.910 "name": "BaseBdev4", 00:17:22.910 "aliases": [ 00:17:22.910 "845d84f1-b04d-4ef3-9b42-11cef12224f2" 00:17:22.910 ], 00:17:22.910 "product_name": "Malloc disk", 00:17:22.910 "block_size": 512, 00:17:22.910 "num_blocks": 65536, 00:17:22.910 "uuid": "845d84f1-b04d-4ef3-9b42-11cef12224f2", 00:17:22.910 "assigned_rate_limits": { 00:17:22.910 "rw_ios_per_sec": 0, 00:17:22.910 "rw_mbytes_per_sec": 0, 00:17:22.910 "r_mbytes_per_sec": 0, 00:17:22.910 "w_mbytes_per_sec": 0 00:17:22.910 }, 00:17:22.910 "claimed": true, 00:17:22.910 "claim_type": "exclusive_write", 00:17:22.910 "zoned": false, 00:17:22.910 "supported_io_types": { 00:17:22.910 "read": true, 00:17:22.910 "write": true, 00:17:22.910 "unmap": true, 00:17:22.910 "flush": true, 00:17:22.910 "reset": true, 00:17:22.910 "nvme_admin": false, 00:17:22.910 "nvme_io": false, 00:17:22.910 
"nvme_io_md": false, 00:17:22.910 "write_zeroes": true, 00:17:22.910 "zcopy": true, 00:17:22.910 "get_zone_info": false, 00:17:22.910 "zone_management": false, 00:17:22.910 "zone_append": false, 00:17:22.910 "compare": false, 00:17:22.910 "compare_and_write": false, 00:17:22.910 "abort": true, 00:17:22.910 "seek_hole": false, 00:17:22.910 "seek_data": false, 00:17:22.910 "copy": true, 00:17:22.910 "nvme_iov_md": false 00:17:22.910 }, 00:17:22.910 "memory_domains": [ 00:17:22.910 { 00:17:22.910 "dma_device_id": "system", 00:17:22.910 "dma_device_type": 1 00:17:22.910 }, 00:17:22.910 { 00:17:22.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.910 "dma_device_type": 2 00:17:22.911 } 00:17:22.911 ], 00:17:22.911 "driver_specific": {} 00:17:22.911 }' 00:17:22.911 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.911 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.911 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.911 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.169 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.169 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.169 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.169 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.169 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.169 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.428 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.428 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:17:23.428 18:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:23.687 [2024-07-15 18:32:09.156230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:23.687 [2024-07-15 18:32:09.156256] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.687 [2024-07-15 18:32:09.156304] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.687 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.254 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.254 "name": "Existed_Raid", 00:17:24.254 "uuid": "eadbcc8b-4802-40d0-aae3-e5713ada7a1e", 00:17:24.254 "strip_size_kb": 64, 00:17:24.254 "state": "offline", 00:17:24.254 "raid_level": "raid0", 00:17:24.254 "superblock": false, 00:17:24.254 "num_base_bdevs": 4, 00:17:24.254 "num_base_bdevs_discovered": 3, 00:17:24.254 "num_base_bdevs_operational": 3, 00:17:24.254 "base_bdevs_list": [ 00:17:24.254 { 00:17:24.254 "name": null, 00:17:24.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.254 "is_configured": false, 00:17:24.254 "data_offset": 0, 00:17:24.254 "data_size": 65536 00:17:24.254 }, 00:17:24.254 { 00:17:24.254 "name": "BaseBdev2", 00:17:24.254 "uuid": "8c226511-bd1f-4643-84cd-a4ac573f38c0", 00:17:24.254 "is_configured": true, 00:17:24.254 "data_offset": 0, 00:17:24.254 "data_size": 65536 00:17:24.254 }, 00:17:24.254 { 00:17:24.254 "name": "BaseBdev3", 00:17:24.254 "uuid": "5b830d4e-e6af-4239-9946-46b42e2cb57d", 00:17:24.254 "is_configured": true, 00:17:24.254 "data_offset": 0, 00:17:24.254 "data_size": 65536 00:17:24.254 }, 00:17:24.254 { 00:17:24.254 "name": "BaseBdev4", 00:17:24.254 "uuid": "845d84f1-b04d-4ef3-9b42-11cef12224f2", 00:17:24.254 "is_configured": true, 00:17:24.254 "data_offset": 0, 00:17:24.254 "data_size": 65536 00:17:24.254 } 00:17:24.254 ] 00:17:24.254 }' 
00:17:24.254 18:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.254 18:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.191 18:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:25.191 18:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:25.191 18:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.191 18:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:25.760 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:25.760 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:25.760 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:26.019 [2024-07-15 18:32:11.423523] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:26.019 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:26.019 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:26.019 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.019 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:26.586 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:26.586 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:17:26.586 18:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:26.845 [2024-07-15 18:32:12.192228] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:26.845 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:26.845 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:26.845 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.845 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:27.104 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:27.104 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.104 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:27.104 [2024-07-15 18:32:12.644236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:27.104 [2024-07-15 18:32:12.644274] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2073490 name Existed_Raid, state offline 00:17:27.363 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:27.363 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:27.363 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.363 18:32:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:27.622 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:27.622 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:27.622 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:27.622 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:27.622 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:27.622 18:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.887 BaseBdev2 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.887 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.461 18:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.461 [ 00:17:28.461 { 00:17:28.461 "name": 
"BaseBdev2", 00:17:28.461 "aliases": [ 00:17:28.461 "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94" 00:17:28.461 ], 00:17:28.461 "product_name": "Malloc disk", 00:17:28.461 "block_size": 512, 00:17:28.461 "num_blocks": 65536, 00:17:28.461 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:28.461 "assigned_rate_limits": { 00:17:28.461 "rw_ios_per_sec": 0, 00:17:28.461 "rw_mbytes_per_sec": 0, 00:17:28.461 "r_mbytes_per_sec": 0, 00:17:28.461 "w_mbytes_per_sec": 0 00:17:28.461 }, 00:17:28.461 "claimed": false, 00:17:28.461 "zoned": false, 00:17:28.461 "supported_io_types": { 00:17:28.461 "read": true, 00:17:28.461 "write": true, 00:17:28.461 "unmap": true, 00:17:28.461 "flush": true, 00:17:28.461 "reset": true, 00:17:28.461 "nvme_admin": false, 00:17:28.461 "nvme_io": false, 00:17:28.461 "nvme_io_md": false, 00:17:28.461 "write_zeroes": true, 00:17:28.461 "zcopy": true, 00:17:28.461 "get_zone_info": false, 00:17:28.461 "zone_management": false, 00:17:28.461 "zone_append": false, 00:17:28.461 "compare": false, 00:17:28.461 "compare_and_write": false, 00:17:28.461 "abort": true, 00:17:28.461 "seek_hole": false, 00:17:28.461 "seek_data": false, 00:17:28.461 "copy": true, 00:17:28.461 "nvme_iov_md": false 00:17:28.461 }, 00:17:28.461 "memory_domains": [ 00:17:28.461 { 00:17:28.461 "dma_device_id": "system", 00:17:28.461 "dma_device_type": 1 00:17:28.461 }, 00:17:28.461 { 00:17:28.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.461 "dma_device_type": 2 00:17:28.461 } 00:17:28.461 ], 00:17:28.461 "driver_specific": {} 00:17:28.461 } 00:17:28.461 ] 00:17:28.720 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:28.720 18:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:28.720 18:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:28.720 18:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:28.979 BaseBdev3 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:28.979 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.547 18:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.806 [ 00:17:29.806 { 00:17:29.806 "name": "BaseBdev3", 00:17:29.806 "aliases": [ 00:17:29.806 "9fed79df-a70c-4def-9943-244a22714a3d" 00:17:29.806 ], 00:17:29.806 "product_name": "Malloc disk", 00:17:29.806 "block_size": 512, 00:17:29.806 "num_blocks": 65536, 00:17:29.806 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:29.806 "assigned_rate_limits": { 00:17:29.806 "rw_ios_per_sec": 0, 00:17:29.806 "rw_mbytes_per_sec": 0, 00:17:29.806 "r_mbytes_per_sec": 0, 00:17:29.806 "w_mbytes_per_sec": 0 00:17:29.806 }, 00:17:29.806 "claimed": false, 00:17:29.806 "zoned": false, 00:17:29.806 "supported_io_types": { 00:17:29.806 "read": true, 00:17:29.806 "write": true, 00:17:29.806 "unmap": true, 00:17:29.806 "flush": true, 00:17:29.806 
"reset": true, 00:17:29.806 "nvme_admin": false, 00:17:29.806 "nvme_io": false, 00:17:29.806 "nvme_io_md": false, 00:17:29.806 "write_zeroes": true, 00:17:29.806 "zcopy": true, 00:17:29.806 "get_zone_info": false, 00:17:29.806 "zone_management": false, 00:17:29.806 "zone_append": false, 00:17:29.806 "compare": false, 00:17:29.806 "compare_and_write": false, 00:17:29.806 "abort": true, 00:17:29.806 "seek_hole": false, 00:17:29.806 "seek_data": false, 00:17:29.806 "copy": true, 00:17:29.806 "nvme_iov_md": false 00:17:29.806 }, 00:17:29.806 "memory_domains": [ 00:17:29.806 { 00:17:29.806 "dma_device_id": "system", 00:17:29.806 "dma_device_type": 1 00:17:29.806 }, 00:17:29.806 { 00:17:29.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.806 "dma_device_type": 2 00:17:29.806 } 00:17:29.806 ], 00:17:29.806 "driver_specific": {} 00:17:29.806 } 00:17:29.806 ] 00:17:29.806 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:29.806 18:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:29.806 18:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:29.806 18:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:30.374 BaseBdev4 00:17:30.374 18:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:30.374 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:30.374 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:30.374 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:30.374 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:30.374 18:32:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:30.374 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.633 18:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:31.201 [ 00:17:31.201 { 00:17:31.201 "name": "BaseBdev4", 00:17:31.201 "aliases": [ 00:17:31.201 "508a0b9d-b63b-4b6a-a629-48f6499cc006" 00:17:31.201 ], 00:17:31.201 "product_name": "Malloc disk", 00:17:31.201 "block_size": 512, 00:17:31.201 "num_blocks": 65536, 00:17:31.201 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:31.201 "assigned_rate_limits": { 00:17:31.201 "rw_ios_per_sec": 0, 00:17:31.201 "rw_mbytes_per_sec": 0, 00:17:31.201 "r_mbytes_per_sec": 0, 00:17:31.201 "w_mbytes_per_sec": 0 00:17:31.201 }, 00:17:31.201 "claimed": false, 00:17:31.201 "zoned": false, 00:17:31.201 "supported_io_types": { 00:17:31.201 "read": true, 00:17:31.201 "write": true, 00:17:31.201 "unmap": true, 00:17:31.201 "flush": true, 00:17:31.201 "reset": true, 00:17:31.201 "nvme_admin": false, 00:17:31.201 "nvme_io": false, 00:17:31.201 "nvme_io_md": false, 00:17:31.201 "write_zeroes": true, 00:17:31.201 "zcopy": true, 00:17:31.201 "get_zone_info": false, 00:17:31.201 "zone_management": false, 00:17:31.201 "zone_append": false, 00:17:31.201 "compare": false, 00:17:31.201 "compare_and_write": false, 00:17:31.201 "abort": true, 00:17:31.201 "seek_hole": false, 00:17:31.201 "seek_data": false, 00:17:31.201 "copy": true, 00:17:31.201 "nvme_iov_md": false 00:17:31.201 }, 00:17:31.201 "memory_domains": [ 00:17:31.201 { 00:17:31.201 "dma_device_id": "system", 00:17:31.201 "dma_device_type": 1 00:17:31.201 }, 00:17:31.201 { 00:17:31.201 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:31.201 "dma_device_type": 2 00:17:31.201 } 00:17:31.201 ], 00:17:31.201 "driver_specific": {} 00:17:31.201 } 00:17:31.201 ] 00:17:31.201 18:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:31.201 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:31.201 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:31.201 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:31.460 [2024-07-15 18:32:16.766908] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.460 [2024-07-15 18:32:16.766943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.460 [2024-07-15 18:32:16.766966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.460 [2024-07-15 18:32:16.768350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.460 [2024-07-15 18:32:16.768391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:31.460 
18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.460 18:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.028 18:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.028 "name": "Existed_Raid", 00:17:32.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.028 "strip_size_kb": 64, 00:17:32.028 "state": "configuring", 00:17:32.028 "raid_level": "raid0", 00:17:32.028 "superblock": false, 00:17:32.028 "num_base_bdevs": 4, 00:17:32.028 "num_base_bdevs_discovered": 3, 00:17:32.028 "num_base_bdevs_operational": 4, 00:17:32.028 "base_bdevs_list": [ 00:17:32.028 { 00:17:32.028 "name": "BaseBdev1", 00:17:32.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.028 "is_configured": false, 00:17:32.028 "data_offset": 0, 00:17:32.028 "data_size": 0 00:17:32.028 }, 00:17:32.028 { 00:17:32.028 "name": "BaseBdev2", 00:17:32.028 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:32.028 "is_configured": true, 00:17:32.028 "data_offset": 0, 00:17:32.028 "data_size": 65536 00:17:32.028 }, 00:17:32.028 { 00:17:32.028 "name": "BaseBdev3", 00:17:32.028 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:32.028 "is_configured": true, 00:17:32.028 "data_offset": 
0, 00:17:32.028 "data_size": 65536 00:17:32.028 }, 00:17:32.028 { 00:17:32.028 "name": "BaseBdev4", 00:17:32.028 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:32.028 "is_configured": true, 00:17:32.028 "data_offset": 0, 00:17:32.028 "data_size": 65536 00:17:32.028 } 00:17:32.028 ] 00:17:32.028 }' 00:17:32.028 18:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.028 18:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.596 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:33.163 [2024-07-15 18:32:18.591869] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.163 18:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.752 18:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.752 "name": "Existed_Raid", 00:17:33.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.752 "strip_size_kb": 64, 00:17:33.752 "state": "configuring", 00:17:33.752 "raid_level": "raid0", 00:17:33.752 "superblock": false, 00:17:33.752 "num_base_bdevs": 4, 00:17:33.752 "num_base_bdevs_discovered": 2, 00:17:33.752 "num_base_bdevs_operational": 4, 00:17:33.752 "base_bdevs_list": [ 00:17:33.752 { 00:17:33.752 "name": "BaseBdev1", 00:17:33.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.752 "is_configured": false, 00:17:33.752 "data_offset": 0, 00:17:33.752 "data_size": 0 00:17:33.752 }, 00:17:33.752 { 00:17:33.752 "name": null, 00:17:33.752 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:33.752 "is_configured": false, 00:17:33.752 "data_offset": 0, 00:17:33.752 "data_size": 65536 00:17:33.752 }, 00:17:33.752 { 00:17:33.752 "name": "BaseBdev3", 00:17:33.752 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:33.752 "is_configured": true, 00:17:33.752 "data_offset": 0, 00:17:33.752 "data_size": 65536 00:17:33.752 }, 00:17:33.752 { 00:17:33.752 "name": "BaseBdev4", 00:17:33.752 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:33.752 "is_configured": true, 00:17:33.752 "data_offset": 0, 00:17:33.752 "data_size": 65536 00:17:33.752 } 00:17:33.752 ] 00:17:33.752 }' 00:17:33.752 18:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.752 18:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.440 18:32:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.440 18:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:34.440 18:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:34.440 18:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:34.698 [2024-07-15 18:32:20.235700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.698 BaseBdev1 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:34.957 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.216 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.216 [ 00:17:35.216 { 00:17:35.216 "name": "BaseBdev1", 00:17:35.216 "aliases": [ 00:17:35.216 
"064be989-40eb-400a-a552-c49387c0e0e3" 00:17:35.216 ], 00:17:35.216 "product_name": "Malloc disk", 00:17:35.216 "block_size": 512, 00:17:35.216 "num_blocks": 65536, 00:17:35.216 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:35.216 "assigned_rate_limits": { 00:17:35.216 "rw_ios_per_sec": 0, 00:17:35.216 "rw_mbytes_per_sec": 0, 00:17:35.216 "r_mbytes_per_sec": 0, 00:17:35.216 "w_mbytes_per_sec": 0 00:17:35.216 }, 00:17:35.216 "claimed": true, 00:17:35.216 "claim_type": "exclusive_write", 00:17:35.216 "zoned": false, 00:17:35.216 "supported_io_types": { 00:17:35.216 "read": true, 00:17:35.216 "write": true, 00:17:35.216 "unmap": true, 00:17:35.216 "flush": true, 00:17:35.216 "reset": true, 00:17:35.216 "nvme_admin": false, 00:17:35.216 "nvme_io": false, 00:17:35.216 "nvme_io_md": false, 00:17:35.216 "write_zeroes": true, 00:17:35.216 "zcopy": true, 00:17:35.216 "get_zone_info": false, 00:17:35.216 "zone_management": false, 00:17:35.216 "zone_append": false, 00:17:35.216 "compare": false, 00:17:35.216 "compare_and_write": false, 00:17:35.216 "abort": true, 00:17:35.216 "seek_hole": false, 00:17:35.216 "seek_data": false, 00:17:35.216 "copy": true, 00:17:35.216 "nvme_iov_md": false 00:17:35.216 }, 00:17:35.216 "memory_domains": [ 00:17:35.216 { 00:17:35.216 "dma_device_id": "system", 00:17:35.216 "dma_device_type": 1 00:17:35.216 }, 00:17:35.216 { 00:17:35.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.216 "dma_device_type": 2 00:17:35.216 } 00:17:35.216 ], 00:17:35.216 "driver_specific": {} 00:17:35.216 } 00:17:35.216 ] 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.474 18:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.733 18:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.733 "name": "Existed_Raid", 00:17:35.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.733 "strip_size_kb": 64, 00:17:35.733 "state": "configuring", 00:17:35.733 "raid_level": "raid0", 00:17:35.733 "superblock": false, 00:17:35.733 "num_base_bdevs": 4, 00:17:35.733 "num_base_bdevs_discovered": 3, 00:17:35.733 "num_base_bdevs_operational": 4, 00:17:35.733 "base_bdevs_list": [ 00:17:35.733 { 00:17:35.733 "name": "BaseBdev1", 00:17:35.733 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:35.733 "is_configured": true, 00:17:35.733 "data_offset": 0, 00:17:35.733 "data_size": 65536 00:17:35.733 }, 00:17:35.733 { 00:17:35.733 "name": null, 00:17:35.733 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 
00:17:35.733 "is_configured": false, 00:17:35.733 "data_offset": 0, 00:17:35.733 "data_size": 65536 00:17:35.733 }, 00:17:35.733 { 00:17:35.733 "name": "BaseBdev3", 00:17:35.733 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:35.733 "is_configured": true, 00:17:35.733 "data_offset": 0, 00:17:35.733 "data_size": 65536 00:17:35.733 }, 00:17:35.733 { 00:17:35.733 "name": "BaseBdev4", 00:17:35.733 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:35.733 "is_configured": true, 00:17:35.733 "data_offset": 0, 00:17:35.733 "data_size": 65536 00:17:35.733 } 00:17:35.733 ] 00:17:35.733 }' 00:17:35.733 18:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.733 18:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.300 18:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.300 18:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:36.558 18:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:36.558 18:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:36.817 [2024-07-15 18:32:22.160921] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.817 18:32:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.817 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.076 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.076 "name": "Existed_Raid", 00:17:37.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.076 "strip_size_kb": 64, 00:17:37.076 "state": "configuring", 00:17:37.076 "raid_level": "raid0", 00:17:37.076 "superblock": false, 00:17:37.076 "num_base_bdevs": 4, 00:17:37.076 "num_base_bdevs_discovered": 2, 00:17:37.076 "num_base_bdevs_operational": 4, 00:17:37.076 "base_bdevs_list": [ 00:17:37.076 { 00:17:37.076 "name": "BaseBdev1", 00:17:37.076 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:37.076 "is_configured": true, 00:17:37.076 "data_offset": 0, 00:17:37.076 "data_size": 65536 00:17:37.076 }, 00:17:37.076 { 00:17:37.076 "name": null, 00:17:37.076 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:37.076 "is_configured": false, 00:17:37.076 "data_offset": 0, 00:17:37.076 
"data_size": 65536 00:17:37.076 }, 00:17:37.076 { 00:17:37.076 "name": null, 00:17:37.076 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:37.076 "is_configured": false, 00:17:37.076 "data_offset": 0, 00:17:37.076 "data_size": 65536 00:17:37.076 }, 00:17:37.076 { 00:17:37.076 "name": "BaseBdev4", 00:17:37.076 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:37.076 "is_configured": true, 00:17:37.076 "data_offset": 0, 00:17:37.076 "data_size": 65536 00:17:37.076 } 00:17:37.076 ] 00:17:37.076 }' 00:17:37.076 18:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.076 18:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.643 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.643 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.901 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:37.901 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:38.159 [2024-07-15 18:32:23.560719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.159 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.418 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.418 "name": "Existed_Raid", 00:17:38.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.418 "strip_size_kb": 64, 00:17:38.418 "state": "configuring", 00:17:38.418 "raid_level": "raid0", 00:17:38.418 "superblock": false, 00:17:38.418 "num_base_bdevs": 4, 00:17:38.418 "num_base_bdevs_discovered": 3, 00:17:38.418 "num_base_bdevs_operational": 4, 00:17:38.418 "base_bdevs_list": [ 00:17:38.418 { 00:17:38.418 "name": "BaseBdev1", 00:17:38.418 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:38.418 "is_configured": true, 00:17:38.418 "data_offset": 0, 00:17:38.418 "data_size": 65536 00:17:38.418 }, 00:17:38.418 { 00:17:38.418 "name": null, 00:17:38.418 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:38.418 "is_configured": false, 00:17:38.418 "data_offset": 0, 00:17:38.418 "data_size": 65536 00:17:38.418 }, 00:17:38.418 { 00:17:38.418 "name": 
"BaseBdev3", 00:17:38.418 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:38.418 "is_configured": true, 00:17:38.418 "data_offset": 0, 00:17:38.418 "data_size": 65536 00:17:38.418 }, 00:17:38.418 { 00:17:38.418 "name": "BaseBdev4", 00:17:38.418 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:38.418 "is_configured": true, 00:17:38.418 "data_offset": 0, 00:17:38.418 "data_size": 65536 00:17:38.418 } 00:17:38.418 ] 00:17:38.418 }' 00:17:38.418 18:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.418 18:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.984 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.984 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:39.241 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:39.241 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:39.498 [2024-07-15 18:32:24.868230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.498 18:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.756 18:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.756 "name": "Existed_Raid", 00:17:39.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.756 "strip_size_kb": 64, 00:17:39.756 "state": "configuring", 00:17:39.756 "raid_level": "raid0", 00:17:39.756 "superblock": false, 00:17:39.756 "num_base_bdevs": 4, 00:17:39.756 "num_base_bdevs_discovered": 2, 00:17:39.756 "num_base_bdevs_operational": 4, 00:17:39.756 "base_bdevs_list": [ 00:17:39.756 { 00:17:39.756 "name": null, 00:17:39.756 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:39.756 "is_configured": false, 00:17:39.756 "data_offset": 0, 00:17:39.756 "data_size": 65536 00:17:39.756 }, 00:17:39.756 { 00:17:39.756 "name": null, 00:17:39.756 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:39.756 "is_configured": false, 00:17:39.756 "data_offset": 0, 00:17:39.756 "data_size": 65536 00:17:39.756 }, 00:17:39.756 { 00:17:39.756 "name": "BaseBdev3", 00:17:39.756 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:39.756 "is_configured": true, 
00:17:39.756 "data_offset": 0, 00:17:39.756 "data_size": 65536 00:17:39.756 }, 00:17:39.756 { 00:17:39.756 "name": "BaseBdev4", 00:17:39.756 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:39.756 "is_configured": true, 00:17:39.756 "data_offset": 0, 00:17:39.756 "data_size": 65536 00:17:39.756 } 00:17:39.756 ] 00:17:39.756 }' 00:17:39.756 18:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.756 18:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.323 18:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.323 18:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.580 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:40.580 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:40.838 [2024-07-15 18:32:26.250477] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.838 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.096 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.096 "name": "Existed_Raid", 00:17:41.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.096 "strip_size_kb": 64, 00:17:41.096 "state": "configuring", 00:17:41.096 "raid_level": "raid0", 00:17:41.096 "superblock": false, 00:17:41.096 "num_base_bdevs": 4, 00:17:41.096 "num_base_bdevs_discovered": 3, 00:17:41.096 "num_base_bdevs_operational": 4, 00:17:41.096 "base_bdevs_list": [ 00:17:41.096 { 00:17:41.096 "name": null, 00:17:41.096 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:41.096 "is_configured": false, 00:17:41.096 "data_offset": 0, 00:17:41.096 "data_size": 65536 00:17:41.096 }, 00:17:41.096 { 00:17:41.096 "name": "BaseBdev2", 00:17:41.096 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:41.096 "is_configured": true, 00:17:41.096 "data_offset": 0, 00:17:41.096 "data_size": 65536 00:17:41.096 }, 00:17:41.096 { 00:17:41.096 "name": "BaseBdev3", 00:17:41.096 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:41.096 "is_configured": true, 00:17:41.096 "data_offset": 0, 00:17:41.096 "data_size": 65536 00:17:41.096 }, 
00:17:41.096 { 00:17:41.096 "name": "BaseBdev4", 00:17:41.096 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:41.096 "is_configured": true, 00:17:41.096 "data_offset": 0, 00:17:41.096 "data_size": 65536 00:17:41.096 } 00:17:41.096 ] 00:17:41.096 }' 00:17:41.096 18:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.096 18:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.662 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.662 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:41.920 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:41.920 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.920 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:42.179 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 064be989-40eb-400a-a552-c49387c0e0e3 00:17:42.437 [2024-07-15 18:32:27.817919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:42.437 [2024-07-15 18:32:27.817963] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x206bab0 00:17:42.438 [2024-07-15 18:32:27.817971] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:42.438 [2024-07-15 18:32:27.818166] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2072b70 00:17:42.438 
[2024-07-15 18:32:27.818290] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x206bab0 00:17:42.438 [2024-07-15 18:32:27.818299] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x206bab0 00:17:42.438 [2024-07-15 18:32:27.818459] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.438 NewBaseBdev 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.438 18:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.696 18:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:42.955 [ 00:17:42.955 { 00:17:42.955 "name": "NewBaseBdev", 00:17:42.955 "aliases": [ 00:17:42.955 "064be989-40eb-400a-a552-c49387c0e0e3" 00:17:42.955 ], 00:17:42.955 "product_name": "Malloc disk", 00:17:42.955 "block_size": 512, 00:17:42.955 "num_blocks": 65536, 00:17:42.955 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:42.955 "assigned_rate_limits": { 00:17:42.955 "rw_ios_per_sec": 0, 00:17:42.955 "rw_mbytes_per_sec": 0, 00:17:42.955 "r_mbytes_per_sec": 0, 00:17:42.955 
"w_mbytes_per_sec": 0 00:17:42.955 }, 00:17:42.955 "claimed": true, 00:17:42.955 "claim_type": "exclusive_write", 00:17:42.955 "zoned": false, 00:17:42.955 "supported_io_types": { 00:17:42.955 "read": true, 00:17:42.955 "write": true, 00:17:42.955 "unmap": true, 00:17:42.955 "flush": true, 00:17:42.955 "reset": true, 00:17:42.955 "nvme_admin": false, 00:17:42.955 "nvme_io": false, 00:17:42.955 "nvme_io_md": false, 00:17:42.955 "write_zeroes": true, 00:17:42.955 "zcopy": true, 00:17:42.955 "get_zone_info": false, 00:17:42.955 "zone_management": false, 00:17:42.955 "zone_append": false, 00:17:42.955 "compare": false, 00:17:42.955 "compare_and_write": false, 00:17:42.955 "abort": true, 00:17:42.955 "seek_hole": false, 00:17:42.955 "seek_data": false, 00:17:42.955 "copy": true, 00:17:42.955 "nvme_iov_md": false 00:17:42.955 }, 00:17:42.955 "memory_domains": [ 00:17:42.955 { 00:17:42.955 "dma_device_id": "system", 00:17:42.955 "dma_device_type": 1 00:17:42.955 }, 00:17:42.955 { 00:17:42.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.955 "dma_device_type": 2 00:17:42.955 } 00:17:42.955 ], 00:17:42.955 "driver_specific": {} 00:17:42.955 } 00:17:42.955 ] 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.955 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.214 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.215 "name": "Existed_Raid", 00:17:43.215 "uuid": "7cf43fa0-906c-45a1-9ce0-23e27d5541e8", 00:17:43.215 "strip_size_kb": 64, 00:17:43.215 "state": "online", 00:17:43.215 "raid_level": "raid0", 00:17:43.215 "superblock": false, 00:17:43.215 "num_base_bdevs": 4, 00:17:43.215 "num_base_bdevs_discovered": 4, 00:17:43.215 "num_base_bdevs_operational": 4, 00:17:43.215 "base_bdevs_list": [ 00:17:43.215 { 00:17:43.215 "name": "NewBaseBdev", 00:17:43.215 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:43.215 "is_configured": true, 00:17:43.215 "data_offset": 0, 00:17:43.215 "data_size": 65536 00:17:43.215 }, 00:17:43.215 { 00:17:43.215 "name": "BaseBdev2", 00:17:43.215 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:43.215 "is_configured": true, 00:17:43.215 "data_offset": 0, 00:17:43.215 "data_size": 65536 00:17:43.215 }, 00:17:43.215 { 00:17:43.215 "name": "BaseBdev3", 00:17:43.215 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:43.215 "is_configured": true, 00:17:43.215 "data_offset": 0, 00:17:43.215 "data_size": 65536 00:17:43.215 }, 00:17:43.215 { 00:17:43.215 "name": "BaseBdev4", 
00:17:43.215 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:43.215 "is_configured": true, 00:17:43.215 "data_offset": 0, 00:17:43.215 "data_size": 65536 00:17:43.215 } 00:17:43.215 ] 00:17:43.215 }' 00:17:43.215 18:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.215 18:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:43.783 [2024-07-15 18:32:29.294264] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:43.783 "name": "Existed_Raid", 00:17:43.783 "aliases": [ 00:17:43.783 "7cf43fa0-906c-45a1-9ce0-23e27d5541e8" 00:17:43.783 ], 00:17:43.783 "product_name": "Raid Volume", 00:17:43.783 "block_size": 512, 00:17:43.783 "num_blocks": 262144, 00:17:43.783 "uuid": "7cf43fa0-906c-45a1-9ce0-23e27d5541e8", 00:17:43.783 "assigned_rate_limits": { 00:17:43.783 "rw_ios_per_sec": 0, 00:17:43.783 
"rw_mbytes_per_sec": 0, 00:17:43.783 "r_mbytes_per_sec": 0, 00:17:43.783 "w_mbytes_per_sec": 0 00:17:43.783 }, 00:17:43.783 "claimed": false, 00:17:43.783 "zoned": false, 00:17:43.783 "supported_io_types": { 00:17:43.783 "read": true, 00:17:43.783 "write": true, 00:17:43.783 "unmap": true, 00:17:43.783 "flush": true, 00:17:43.783 "reset": true, 00:17:43.783 "nvme_admin": false, 00:17:43.783 "nvme_io": false, 00:17:43.783 "nvme_io_md": false, 00:17:43.783 "write_zeroes": true, 00:17:43.783 "zcopy": false, 00:17:43.783 "get_zone_info": false, 00:17:43.783 "zone_management": false, 00:17:43.783 "zone_append": false, 00:17:43.783 "compare": false, 00:17:43.783 "compare_and_write": false, 00:17:43.783 "abort": false, 00:17:43.783 "seek_hole": false, 00:17:43.783 "seek_data": false, 00:17:43.783 "copy": false, 00:17:43.783 "nvme_iov_md": false 00:17:43.783 }, 00:17:43.783 "memory_domains": [ 00:17:43.783 { 00:17:43.783 "dma_device_id": "system", 00:17:43.783 "dma_device_type": 1 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.783 "dma_device_type": 2 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "system", 00:17:43.783 "dma_device_type": 1 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.783 "dma_device_type": 2 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "system", 00:17:43.783 "dma_device_type": 1 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.783 "dma_device_type": 2 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "system", 00:17:43.783 "dma_device_type": 1 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.783 "dma_device_type": 2 00:17:43.783 } 00:17:43.783 ], 00:17:43.783 "driver_specific": { 00:17:43.783 "raid": { 00:17:43.783 "uuid": "7cf43fa0-906c-45a1-9ce0-23e27d5541e8", 00:17:43.783 "strip_size_kb": 64, 00:17:43.783 "state": "online", 
00:17:43.783 "raid_level": "raid0", 00:17:43.783 "superblock": false, 00:17:43.783 "num_base_bdevs": 4, 00:17:43.783 "num_base_bdevs_discovered": 4, 00:17:43.783 "num_base_bdevs_operational": 4, 00:17:43.783 "base_bdevs_list": [ 00:17:43.783 { 00:17:43.783 "name": "NewBaseBdev", 00:17:43.783 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:43.783 "is_configured": true, 00:17:43.783 "data_offset": 0, 00:17:43.783 "data_size": 65536 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "name": "BaseBdev2", 00:17:43.783 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:43.783 "is_configured": true, 00:17:43.783 "data_offset": 0, 00:17:43.783 "data_size": 65536 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "name": "BaseBdev3", 00:17:43.783 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:43.783 "is_configured": true, 00:17:43.783 "data_offset": 0, 00:17:43.783 "data_size": 65536 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "name": "BaseBdev4", 00:17:43.783 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:43.783 "is_configured": true, 00:17:43.783 "data_offset": 0, 00:17:43.783 "data_size": 65536 00:17:43.783 } 00:17:43.783 ] 00:17:43.783 } 00:17:43.783 } 00:17:43.783 }' 00:17:43.783 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.042 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:44.042 BaseBdev2 00:17:44.042 BaseBdev3 00:17:44.042 BaseBdev4' 00:17:44.042 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.042 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:44.042 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.301 18:32:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.301 "name": "NewBaseBdev", 00:17:44.301 "aliases": [ 00:17:44.301 "064be989-40eb-400a-a552-c49387c0e0e3" 00:17:44.301 ], 00:17:44.301 "product_name": "Malloc disk", 00:17:44.301 "block_size": 512, 00:17:44.301 "num_blocks": 65536, 00:17:44.301 "uuid": "064be989-40eb-400a-a552-c49387c0e0e3", 00:17:44.301 "assigned_rate_limits": { 00:17:44.301 "rw_ios_per_sec": 0, 00:17:44.301 "rw_mbytes_per_sec": 0, 00:17:44.301 "r_mbytes_per_sec": 0, 00:17:44.301 "w_mbytes_per_sec": 0 00:17:44.301 }, 00:17:44.301 "claimed": true, 00:17:44.301 "claim_type": "exclusive_write", 00:17:44.301 "zoned": false, 00:17:44.301 "supported_io_types": { 00:17:44.301 "read": true, 00:17:44.301 "write": true, 00:17:44.301 "unmap": true, 00:17:44.301 "flush": true, 00:17:44.301 "reset": true, 00:17:44.301 "nvme_admin": false, 00:17:44.301 "nvme_io": false, 00:17:44.301 "nvme_io_md": false, 00:17:44.301 "write_zeroes": true, 00:17:44.301 "zcopy": true, 00:17:44.301 "get_zone_info": false, 00:17:44.301 "zone_management": false, 00:17:44.301 "zone_append": false, 00:17:44.301 "compare": false, 00:17:44.301 "compare_and_write": false, 00:17:44.301 "abort": true, 00:17:44.301 "seek_hole": false, 00:17:44.301 "seek_data": false, 00:17:44.301 "copy": true, 00:17:44.301 "nvme_iov_md": false 00:17:44.301 }, 00:17:44.301 "memory_domains": [ 00:17:44.301 { 00:17:44.301 "dma_device_id": "system", 00:17:44.301 "dma_device_type": 1 00:17:44.301 }, 00:17:44.301 { 00:17:44.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.301 "dma_device_type": 2 00:17:44.301 } 00:17:44.301 ], 00:17:44.301 "driver_specific": {} 00:17:44.301 }' 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.301 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.559 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.560 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.560 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.560 18:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.560 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.560 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.560 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.560 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:44.818 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.818 "name": "BaseBdev2", 00:17:44.818 "aliases": [ 00:17:44.818 "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94" 00:17:44.818 ], 00:17:44.818 "product_name": "Malloc disk", 00:17:44.818 "block_size": 512, 00:17:44.818 "num_blocks": 65536, 00:17:44.818 "uuid": "ba9bcf66-0d3a-4d5a-b653-e459c8a2de94", 00:17:44.818 "assigned_rate_limits": { 00:17:44.818 "rw_ios_per_sec": 0, 00:17:44.818 "rw_mbytes_per_sec": 0, 00:17:44.818 "r_mbytes_per_sec": 0, 00:17:44.818 "w_mbytes_per_sec": 0 00:17:44.818 }, 00:17:44.818 "claimed": true, 00:17:44.818 
"claim_type": "exclusive_write", 00:17:44.818 "zoned": false, 00:17:44.818 "supported_io_types": { 00:17:44.818 "read": true, 00:17:44.818 "write": true, 00:17:44.818 "unmap": true, 00:17:44.818 "flush": true, 00:17:44.818 "reset": true, 00:17:44.818 "nvme_admin": false, 00:17:44.818 "nvme_io": false, 00:17:44.818 "nvme_io_md": false, 00:17:44.818 "write_zeroes": true, 00:17:44.818 "zcopy": true, 00:17:44.818 "get_zone_info": false, 00:17:44.818 "zone_management": false, 00:17:44.818 "zone_append": false, 00:17:44.818 "compare": false, 00:17:44.818 "compare_and_write": false, 00:17:44.818 "abort": true, 00:17:44.818 "seek_hole": false, 00:17:44.818 "seek_data": false, 00:17:44.818 "copy": true, 00:17:44.818 "nvme_iov_md": false 00:17:44.818 }, 00:17:44.818 "memory_domains": [ 00:17:44.818 { 00:17:44.818 "dma_device_id": "system", 00:17:44.818 "dma_device_type": 1 00:17:44.818 }, 00:17:44.818 { 00:17:44.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.818 "dma_device_type": 2 00:17:44.818 } 00:17:44.818 ], 00:17:44.818 "driver_specific": {} 00:17:44.818 }' 00:17:44.818 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.818 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.077 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.335 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.335 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.335 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:45.335 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.594 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.594 "name": "BaseBdev3", 00:17:45.594 "aliases": [ 00:17:45.594 "9fed79df-a70c-4def-9943-244a22714a3d" 00:17:45.594 ], 00:17:45.594 "product_name": "Malloc disk", 00:17:45.594 "block_size": 512, 00:17:45.594 "num_blocks": 65536, 00:17:45.594 "uuid": "9fed79df-a70c-4def-9943-244a22714a3d", 00:17:45.594 "assigned_rate_limits": { 00:17:45.594 "rw_ios_per_sec": 0, 00:17:45.594 "rw_mbytes_per_sec": 0, 00:17:45.594 "r_mbytes_per_sec": 0, 00:17:45.594 "w_mbytes_per_sec": 0 00:17:45.594 }, 00:17:45.594 "claimed": true, 00:17:45.594 "claim_type": "exclusive_write", 00:17:45.594 "zoned": false, 00:17:45.594 "supported_io_types": { 00:17:45.594 "read": true, 00:17:45.594 "write": true, 00:17:45.594 "unmap": true, 00:17:45.594 "flush": true, 00:17:45.594 "reset": true, 00:17:45.594 "nvme_admin": false, 00:17:45.594 "nvme_io": false, 00:17:45.594 "nvme_io_md": false, 00:17:45.594 "write_zeroes": true, 00:17:45.594 "zcopy": true, 00:17:45.594 "get_zone_info": false, 00:17:45.594 "zone_management": false, 00:17:45.594 "zone_append": false, 00:17:45.594 "compare": false, 00:17:45.594 "compare_and_write": false, 00:17:45.594 "abort": true, 00:17:45.594 
"seek_hole": false, 00:17:45.594 "seek_data": false, 00:17:45.594 "copy": true, 00:17:45.594 "nvme_iov_md": false 00:17:45.594 }, 00:17:45.594 "memory_domains": [ 00:17:45.594 { 00:17:45.594 "dma_device_id": "system", 00:17:45.594 "dma_device_type": 1 00:17:45.594 }, 00:17:45.594 { 00:17:45.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.594 "dma_device_type": 2 00:17:45.594 } 00:17:45.594 ], 00:17:45.594 "driver_specific": {} 00:17:45.594 }' 00:17:45.594 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.594 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.594 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.594 18:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.594 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.594 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.594 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.594 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev4 00:17:45.853 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.112 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.112 "name": "BaseBdev4", 00:17:46.112 "aliases": [ 00:17:46.112 "508a0b9d-b63b-4b6a-a629-48f6499cc006" 00:17:46.112 ], 00:17:46.112 "product_name": "Malloc disk", 00:17:46.112 "block_size": 512, 00:17:46.112 "num_blocks": 65536, 00:17:46.112 "uuid": "508a0b9d-b63b-4b6a-a629-48f6499cc006", 00:17:46.112 "assigned_rate_limits": { 00:17:46.112 "rw_ios_per_sec": 0, 00:17:46.112 "rw_mbytes_per_sec": 0, 00:17:46.112 "r_mbytes_per_sec": 0, 00:17:46.112 "w_mbytes_per_sec": 0 00:17:46.112 }, 00:17:46.112 "claimed": true, 00:17:46.112 "claim_type": "exclusive_write", 00:17:46.112 "zoned": false, 00:17:46.112 "supported_io_types": { 00:17:46.112 "read": true, 00:17:46.112 "write": true, 00:17:46.112 "unmap": true, 00:17:46.112 "flush": true, 00:17:46.112 "reset": true, 00:17:46.112 "nvme_admin": false, 00:17:46.112 "nvme_io": false, 00:17:46.112 "nvme_io_md": false, 00:17:46.112 "write_zeroes": true, 00:17:46.112 "zcopy": true, 00:17:46.112 "get_zone_info": false, 00:17:46.112 "zone_management": false, 00:17:46.112 "zone_append": false, 00:17:46.112 "compare": false, 00:17:46.112 "compare_and_write": false, 00:17:46.112 "abort": true, 00:17:46.112 "seek_hole": false, 00:17:46.112 "seek_data": false, 00:17:46.112 "copy": true, 00:17:46.112 "nvme_iov_md": false 00:17:46.112 }, 00:17:46.112 "memory_domains": [ 00:17:46.112 { 00:17:46.112 "dma_device_id": "system", 00:17:46.112 "dma_device_type": 1 00:17:46.112 }, 00:17:46.112 { 00:17:46.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.112 "dma_device_type": 2 00:17:46.112 } 00:17:46.112 ], 00:17:46.112 "driver_specific": {} 00:17:46.112 }' 00:17:46.112 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.112 18:32:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.112 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.112 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.375 18:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:46.685 [2024-07-15 18:32:32.045350] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.685 [2024-07-15 18:32:32.045376] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.685 [2024-07-15 18:32:32.045423] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.685 [2024-07-15 18:32:32.045480] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.685 [2024-07-15 18:32:32.045490] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x206bab0 name Existed_Raid, state offline 00:17:46.685 18:32:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2831455 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2831455 ']' 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2831455 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2831455 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2831455' 00:17:46.685 killing process with pid 2831455 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2831455 00:17:46.685 [2024-07-15 18:32:32.110726] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.685 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2831455 00:17:46.685 [2024-07-15 18:32:32.146054] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:46.944 00:17:46.944 real 0m38.131s 00:17:46.944 user 1m11.921s 00:17:46.944 sys 0m4.939s 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.944 ************************************ 00:17:46.944 END TEST 
raid_state_function_test 00:17:46.944 ************************************ 00:17:46.944 18:32:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:46.944 18:32:32 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:46.944 18:32:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:46.944 18:32:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.944 18:32:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.944 ************************************ 00:17:46.944 START TEST raid_state_function_test_sb 00:17:46.944 ************************************ 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:46.944 18:32:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2837871 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2837871' 00:17:46.944 Process raid pid: 2837871 00:17:46.944 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2837871 /var/tmp/spdk-raid.sock 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2837871 ']' 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:46.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.945 18:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.945 [2024-07-15 18:32:32.484064] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:17:46.945 [2024-07-15 18:32:32.484173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.203 [2024-07-15 18:32:32.624196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.203 [2024-07-15 18:32:32.718226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.461 [2024-07-15 18:32:32.782988] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.461 [2024-07-15 18:32:32.783015] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.029 18:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.029 18:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:48.029 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:48.287 [2024-07-15 18:32:33.637815] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.287 [2024-07-15 18:32:33.637856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.287 [2024-07-15 18:32:33.637865] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.287 [2024-07-15 18:32:33.637874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.287 [2024-07-15 18:32:33.637880] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:48.287 [2024-07-15 18:32:33.637888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:17:48.287 [2024-07-15 18:32:33.637895] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:48.287 [2024-07-15 18:32:33.637902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.287 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.550 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.550 "name": "Existed_Raid", 00:17:48.550 "uuid": "8b07843d-f2e8-4fed-a4c4-aaf22dbd545d", 
00:17:48.550 "strip_size_kb": 64, 00:17:48.550 "state": "configuring", 00:17:48.550 "raid_level": "raid0", 00:17:48.550 "superblock": true, 00:17:48.550 "num_base_bdevs": 4, 00:17:48.550 "num_base_bdevs_discovered": 0, 00:17:48.550 "num_base_bdevs_operational": 4, 00:17:48.550 "base_bdevs_list": [ 00:17:48.550 { 00:17:48.550 "name": "BaseBdev1", 00:17:48.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.550 "is_configured": false, 00:17:48.550 "data_offset": 0, 00:17:48.550 "data_size": 0 00:17:48.550 }, 00:17:48.550 { 00:17:48.550 "name": "BaseBdev2", 00:17:48.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.550 "is_configured": false, 00:17:48.550 "data_offset": 0, 00:17:48.550 "data_size": 0 00:17:48.550 }, 00:17:48.550 { 00:17:48.550 "name": "BaseBdev3", 00:17:48.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.550 "is_configured": false, 00:17:48.550 "data_offset": 0, 00:17:48.550 "data_size": 0 00:17:48.550 }, 00:17:48.550 { 00:17:48.550 "name": "BaseBdev4", 00:17:48.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.550 "is_configured": false, 00:17:48.550 "data_offset": 0, 00:17:48.550 "data_size": 0 00:17:48.550 } 00:17:48.550 ] 00:17:48.550 }' 00:17:48.550 18:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.550 18:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.118 18:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.377 [2024-07-15 18:32:34.780727] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.377 [2024-07-15 18:32:34.780757] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd49bc0 name Existed_Raid, state configuring 00:17:49.377 18:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:49.636 [2024-07-15 18:32:35.025407] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.636 [2024-07-15 18:32:35.025431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.636 [2024-07-15 18:32:35.025439] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.636 [2024-07-15 18:32:35.025447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.636 [2024-07-15 18:32:35.025454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.636 [2024-07-15 18:32:35.025462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.636 [2024-07-15 18:32:35.025469] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:49.636 [2024-07-15 18:32:35.025476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:49.636 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.895 [2024-07-15 18:32:35.287624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.895 BaseBdev1 00:17:49.895 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:49.895 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:49.895 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:49.895 18:32:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:49.895 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:49.895 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:49.895 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.164 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.427 [ 00:17:50.427 { 00:17:50.427 "name": "BaseBdev1", 00:17:50.427 "aliases": [ 00:17:50.427 "1ea8ab38-cad0-4845-af8d-15bf5089cb3a" 00:17:50.427 ], 00:17:50.427 "product_name": "Malloc disk", 00:17:50.427 "block_size": 512, 00:17:50.427 "num_blocks": 65536, 00:17:50.427 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:50.427 "assigned_rate_limits": { 00:17:50.427 "rw_ios_per_sec": 0, 00:17:50.427 "rw_mbytes_per_sec": 0, 00:17:50.427 "r_mbytes_per_sec": 0, 00:17:50.427 "w_mbytes_per_sec": 0 00:17:50.427 }, 00:17:50.427 "claimed": true, 00:17:50.427 "claim_type": "exclusive_write", 00:17:50.427 "zoned": false, 00:17:50.427 "supported_io_types": { 00:17:50.427 "read": true, 00:17:50.427 "write": true, 00:17:50.427 "unmap": true, 00:17:50.427 "flush": true, 00:17:50.427 "reset": true, 00:17:50.427 "nvme_admin": false, 00:17:50.427 "nvme_io": false, 00:17:50.427 "nvme_io_md": false, 00:17:50.427 "write_zeroes": true, 00:17:50.427 "zcopy": true, 00:17:50.427 "get_zone_info": false, 00:17:50.427 "zone_management": false, 00:17:50.427 "zone_append": false, 00:17:50.427 "compare": false, 00:17:50.427 "compare_and_write": false, 00:17:50.427 "abort": true, 00:17:50.427 "seek_hole": false, 00:17:50.427 "seek_data": false, 
00:17:50.427 "copy": true, 00:17:50.427 "nvme_iov_md": false 00:17:50.427 }, 00:17:50.427 "memory_domains": [ 00:17:50.427 { 00:17:50.427 "dma_device_id": "system", 00:17:50.427 "dma_device_type": 1 00:17:50.427 }, 00:17:50.427 { 00:17:50.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.427 "dma_device_type": 2 00:17:50.427 } 00:17:50.427 ], 00:17:50.427 "driver_specific": {} 00:17:50.427 } 00:17:50.427 ] 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.427 18:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.685 18:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.685 "name": "Existed_Raid", 00:17:50.685 "uuid": "e8d38ae2-d6fe-410b-96dd-2146ce70dcd0", 00:17:50.685 "strip_size_kb": 64, 00:17:50.685 "state": "configuring", 00:17:50.685 "raid_level": "raid0", 00:17:50.685 "superblock": true, 00:17:50.685 "num_base_bdevs": 4, 00:17:50.685 "num_base_bdevs_discovered": 1, 00:17:50.685 "num_base_bdevs_operational": 4, 00:17:50.685 "base_bdevs_list": [ 00:17:50.685 { 00:17:50.685 "name": "BaseBdev1", 00:17:50.685 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:50.685 "is_configured": true, 00:17:50.685 "data_offset": 2048, 00:17:50.685 "data_size": 63488 00:17:50.685 }, 00:17:50.685 { 00:17:50.685 "name": "BaseBdev2", 00:17:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.685 "is_configured": false, 00:17:50.685 "data_offset": 0, 00:17:50.685 "data_size": 0 00:17:50.685 }, 00:17:50.685 { 00:17:50.685 "name": "BaseBdev3", 00:17:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.685 "is_configured": false, 00:17:50.685 "data_offset": 0, 00:17:50.685 "data_size": 0 00:17:50.685 }, 00:17:50.685 { 00:17:50.685 "name": "BaseBdev4", 00:17:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.685 "is_configured": false, 00:17:50.685 "data_offset": 0, 00:17:50.685 "data_size": 0 00:17:50.685 } 00:17:50.685 ] 00:17:50.685 }' 00:17:50.685 18:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.685 18:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.251 18:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:51.509 [2024-07-15 18:32:36.940244] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:17:51.509 [2024-07-15 18:32:36.940283] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd49430 name Existed_Raid, state configuring 00:17:51.509 18:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:51.768 [2024-07-15 18:32:37.196985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.768 [2024-07-15 18:32:37.198508] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.768 [2024-07-15 18:32:37.198539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.768 [2024-07-15 18:32:37.198548] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:51.768 [2024-07-15 18:32:37.198556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:51.768 [2024-07-15 18:32:37.198563] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:51.768 [2024-07-15 18:32:37.198571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.768 18:32:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.768 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.026 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.026 "name": "Existed_Raid", 00:17:52.026 "uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:17:52.026 "strip_size_kb": 64, 00:17:52.026 "state": "configuring", 00:17:52.026 "raid_level": "raid0", 00:17:52.026 "superblock": true, 00:17:52.026 "num_base_bdevs": 4, 00:17:52.026 "num_base_bdevs_discovered": 1, 00:17:52.026 "num_base_bdevs_operational": 4, 00:17:52.026 "base_bdevs_list": [ 00:17:52.026 { 00:17:52.026 "name": "BaseBdev1", 00:17:52.026 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:52.026 "is_configured": true, 00:17:52.026 "data_offset": 2048, 00:17:52.026 "data_size": 63488 00:17:52.026 }, 00:17:52.026 { 00:17:52.026 "name": "BaseBdev2", 00:17:52.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.026 "is_configured": false, 
00:17:52.026 "data_offset": 0, 00:17:52.027 "data_size": 0 00:17:52.027 }, 00:17:52.027 { 00:17:52.027 "name": "BaseBdev3", 00:17:52.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.027 "is_configured": false, 00:17:52.027 "data_offset": 0, 00:17:52.027 "data_size": 0 00:17:52.027 }, 00:17:52.027 { 00:17:52.027 "name": "BaseBdev4", 00:17:52.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.027 "is_configured": false, 00:17:52.027 "data_offset": 0, 00:17:52.027 "data_size": 0 00:17:52.027 } 00:17:52.027 ] 00:17:52.027 }' 00:17:52.027 18:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.027 18:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.593 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:52.853 [2024-07-15 18:32:38.307204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.853 BaseBdev2 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:52.853 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:17:53.112 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.371 [ 00:17:53.371 { 00:17:53.371 "name": "BaseBdev2", 00:17:53.371 "aliases": [ 00:17:53.371 "48eb7853-5e14-45d3-8d91-e0eaf3d7581c" 00:17:53.371 ], 00:17:53.371 "product_name": "Malloc disk", 00:17:53.371 "block_size": 512, 00:17:53.371 "num_blocks": 65536, 00:17:53.371 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:17:53.371 "assigned_rate_limits": { 00:17:53.371 "rw_ios_per_sec": 0, 00:17:53.371 "rw_mbytes_per_sec": 0, 00:17:53.371 "r_mbytes_per_sec": 0, 00:17:53.371 "w_mbytes_per_sec": 0 00:17:53.371 }, 00:17:53.371 "claimed": true, 00:17:53.371 "claim_type": "exclusive_write", 00:17:53.371 "zoned": false, 00:17:53.371 "supported_io_types": { 00:17:53.371 "read": true, 00:17:53.371 "write": true, 00:17:53.371 "unmap": true, 00:17:53.371 "flush": true, 00:17:53.371 "reset": true, 00:17:53.371 "nvme_admin": false, 00:17:53.371 "nvme_io": false, 00:17:53.371 "nvme_io_md": false, 00:17:53.371 "write_zeroes": true, 00:17:53.371 "zcopy": true, 00:17:53.371 "get_zone_info": false, 00:17:53.371 "zone_management": false, 00:17:53.371 "zone_append": false, 00:17:53.371 "compare": false, 00:17:53.371 "compare_and_write": false, 00:17:53.371 "abort": true, 00:17:53.371 "seek_hole": false, 00:17:53.371 "seek_data": false, 00:17:53.371 "copy": true, 00:17:53.371 "nvme_iov_md": false 00:17:53.371 }, 00:17:53.371 "memory_domains": [ 00:17:53.371 { 00:17:53.371 "dma_device_id": "system", 00:17:53.371 "dma_device_type": 1 00:17:53.371 }, 00:17:53.371 { 00:17:53.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.371 "dma_device_type": 2 00:17:53.371 } 00:17:53.371 ], 00:17:53.371 "driver_specific": {} 00:17:53.371 } 00:17:53.371 ] 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 
-- # return 0 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.371 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.630 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.630 "name": "Existed_Raid", 00:17:53.630 "uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:17:53.630 "strip_size_kb": 64, 
00:17:53.630 "state": "configuring", 00:17:53.630 "raid_level": "raid0", 00:17:53.630 "superblock": true, 00:17:53.630 "num_base_bdevs": 4, 00:17:53.630 "num_base_bdevs_discovered": 2, 00:17:53.630 "num_base_bdevs_operational": 4, 00:17:53.630 "base_bdevs_list": [ 00:17:53.630 { 00:17:53.630 "name": "BaseBdev1", 00:17:53.630 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:53.630 "is_configured": true, 00:17:53.630 "data_offset": 2048, 00:17:53.630 "data_size": 63488 00:17:53.630 }, 00:17:53.630 { 00:17:53.630 "name": "BaseBdev2", 00:17:53.630 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:17:53.630 "is_configured": true, 00:17:53.630 "data_offset": 2048, 00:17:53.630 "data_size": 63488 00:17:53.630 }, 00:17:53.630 { 00:17:53.630 "name": "BaseBdev3", 00:17:53.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.630 "is_configured": false, 00:17:53.630 "data_offset": 0, 00:17:53.630 "data_size": 0 00:17:53.630 }, 00:17:53.630 { 00:17:53.630 "name": "BaseBdev4", 00:17:53.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.630 "is_configured": false, 00:17:53.630 "data_offset": 0, 00:17:53.630 "data_size": 0 00:17:53.630 } 00:17:53.630 ] 00:17:53.630 }' 00:17:53.630 18:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.630 18:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:54.198 [2024-07-15 18:32:39.625932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:54.198 BaseBdev3 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev3 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:54.198 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.455 18:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:54.713 [ 00:17:54.713 { 00:17:54.713 "name": "BaseBdev3", 00:17:54.713 "aliases": [ 00:17:54.713 "29c506a2-43a0-4efb-8d88-3d544828ead3" 00:17:54.713 ], 00:17:54.713 "product_name": "Malloc disk", 00:17:54.713 "block_size": 512, 00:17:54.713 "num_blocks": 65536, 00:17:54.713 "uuid": "29c506a2-43a0-4efb-8d88-3d544828ead3", 00:17:54.713 "assigned_rate_limits": { 00:17:54.713 "rw_ios_per_sec": 0, 00:17:54.713 "rw_mbytes_per_sec": 0, 00:17:54.713 "r_mbytes_per_sec": 0, 00:17:54.713 "w_mbytes_per_sec": 0 00:17:54.713 }, 00:17:54.713 "claimed": true, 00:17:54.713 "claim_type": "exclusive_write", 00:17:54.713 "zoned": false, 00:17:54.713 "supported_io_types": { 00:17:54.713 "read": true, 00:17:54.713 "write": true, 00:17:54.713 "unmap": true, 00:17:54.713 "flush": true, 00:17:54.713 "reset": true, 00:17:54.713 "nvme_admin": false, 00:17:54.713 "nvme_io": false, 00:17:54.713 "nvme_io_md": false, 00:17:54.713 "write_zeroes": true, 00:17:54.713 "zcopy": true, 00:17:54.713 "get_zone_info": false, 00:17:54.713 "zone_management": false, 00:17:54.713 "zone_append": false, 00:17:54.713 
"compare": false, 00:17:54.713 "compare_and_write": false, 00:17:54.713 "abort": true, 00:17:54.713 "seek_hole": false, 00:17:54.713 "seek_data": false, 00:17:54.713 "copy": true, 00:17:54.713 "nvme_iov_md": false 00:17:54.713 }, 00:17:54.713 "memory_domains": [ 00:17:54.713 { 00:17:54.713 "dma_device_id": "system", 00:17:54.713 "dma_device_type": 1 00:17:54.713 }, 00:17:54.713 { 00:17:54.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.713 "dma_device_type": 2 00:17:54.713 } 00:17:54.713 ], 00:17:54.713 "driver_specific": {} 00:17:54.713 } 00:17:54.713 ] 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.713 18:32:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.713 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.970 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.970 "name": "Existed_Raid", 00:17:54.970 "uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:17:54.970 "strip_size_kb": 64, 00:17:54.970 "state": "configuring", 00:17:54.970 "raid_level": "raid0", 00:17:54.970 "superblock": true, 00:17:54.970 "num_base_bdevs": 4, 00:17:54.970 "num_base_bdevs_discovered": 3, 00:17:54.970 "num_base_bdevs_operational": 4, 00:17:54.970 "base_bdevs_list": [ 00:17:54.970 { 00:17:54.970 "name": "BaseBdev1", 00:17:54.970 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:54.970 "is_configured": true, 00:17:54.970 "data_offset": 2048, 00:17:54.970 "data_size": 63488 00:17:54.970 }, 00:17:54.970 { 00:17:54.970 "name": "BaseBdev2", 00:17:54.970 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:17:54.970 "is_configured": true, 00:17:54.970 "data_offset": 2048, 00:17:54.970 "data_size": 63488 00:17:54.970 }, 00:17:54.970 { 00:17:54.970 "name": "BaseBdev3", 00:17:54.970 "uuid": "29c506a2-43a0-4efb-8d88-3d544828ead3", 00:17:54.970 "is_configured": true, 00:17:54.970 "data_offset": 2048, 00:17:54.970 "data_size": 63488 00:17:54.970 }, 00:17:54.970 { 00:17:54.970 "name": "BaseBdev4", 00:17:54.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.970 "is_configured": false, 00:17:54.970 "data_offset": 0, 00:17:54.970 "data_size": 0 00:17:54.970 } 00:17:54.970 ] 00:17:54.970 }' 00:17:54.970 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.970 18:32:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.535 18:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:55.794 [2024-07-15 18:32:41.209720] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:55.794 [2024-07-15 18:32:41.209883] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd4a490 00:17:55.794 [2024-07-15 18:32:41.209896] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:55.794 [2024-07-15 18:32:41.210090] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd362d0 00:17:55.794 [2024-07-15 18:32:41.210219] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd4a490 00:17:55.794 [2024-07-15 18:32:41.210228] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd4a490 00:17:55.794 [2024-07-15 18:32:41.210325] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.794 BaseBdev4 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:55.794 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.052 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:56.310 [ 00:17:56.310 { 00:17:56.310 "name": "BaseBdev4", 00:17:56.310 "aliases": [ 00:17:56.310 "47bb0413-910f-4d14-b62e-66340666d85b" 00:17:56.310 ], 00:17:56.310 "product_name": "Malloc disk", 00:17:56.310 "block_size": 512, 00:17:56.310 "num_blocks": 65536, 00:17:56.310 "uuid": "47bb0413-910f-4d14-b62e-66340666d85b", 00:17:56.310 "assigned_rate_limits": { 00:17:56.310 "rw_ios_per_sec": 0, 00:17:56.310 "rw_mbytes_per_sec": 0, 00:17:56.310 "r_mbytes_per_sec": 0, 00:17:56.310 "w_mbytes_per_sec": 0 00:17:56.310 }, 00:17:56.310 "claimed": true, 00:17:56.310 "claim_type": "exclusive_write", 00:17:56.310 "zoned": false, 00:17:56.310 "supported_io_types": { 00:17:56.310 "read": true, 00:17:56.310 "write": true, 00:17:56.310 "unmap": true, 00:17:56.310 "flush": true, 00:17:56.310 "reset": true, 00:17:56.310 "nvme_admin": false, 00:17:56.310 "nvme_io": false, 00:17:56.310 "nvme_io_md": false, 00:17:56.310 "write_zeroes": true, 00:17:56.310 "zcopy": true, 00:17:56.310 "get_zone_info": false, 00:17:56.310 "zone_management": false, 00:17:56.310 "zone_append": false, 00:17:56.310 "compare": false, 00:17:56.310 "compare_and_write": false, 00:17:56.310 "abort": true, 00:17:56.310 "seek_hole": false, 00:17:56.310 "seek_data": false, 00:17:56.310 "copy": true, 00:17:56.310 "nvme_iov_md": false 00:17:56.310 }, 00:17:56.310 "memory_domains": [ 00:17:56.310 { 00:17:56.310 "dma_device_id": "system", 00:17:56.310 "dma_device_type": 1 00:17:56.310 }, 00:17:56.310 { 00:17:56.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.310 "dma_device_type": 2 00:17:56.310 } 00:17:56.310 ], 00:17:56.310 "driver_specific": {} 00:17:56.310 } 00:17:56.310 ] 
00:17:56.310 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.311 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.569 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.569 "name": "Existed_Raid", 00:17:56.569 
"uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:17:56.569 "strip_size_kb": 64, 00:17:56.569 "state": "online", 00:17:56.569 "raid_level": "raid0", 00:17:56.569 "superblock": true, 00:17:56.569 "num_base_bdevs": 4, 00:17:56.569 "num_base_bdevs_discovered": 4, 00:17:56.569 "num_base_bdevs_operational": 4, 00:17:56.569 "base_bdevs_list": [ 00:17:56.569 { 00:17:56.569 "name": "BaseBdev1", 00:17:56.569 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:56.569 "is_configured": true, 00:17:56.569 "data_offset": 2048, 00:17:56.569 "data_size": 63488 00:17:56.569 }, 00:17:56.569 { 00:17:56.569 "name": "BaseBdev2", 00:17:56.569 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:17:56.569 "is_configured": true, 00:17:56.569 "data_offset": 2048, 00:17:56.569 "data_size": 63488 00:17:56.569 }, 00:17:56.569 { 00:17:56.569 "name": "BaseBdev3", 00:17:56.569 "uuid": "29c506a2-43a0-4efb-8d88-3d544828ead3", 00:17:56.569 "is_configured": true, 00:17:56.569 "data_offset": 2048, 00:17:56.569 "data_size": 63488 00:17:56.569 }, 00:17:56.569 { 00:17:56.569 "name": "BaseBdev4", 00:17:56.569 "uuid": "47bb0413-910f-4d14-b62e-66340666d85b", 00:17:56.569 "is_configured": true, 00:17:56.569 "data_offset": 2048, 00:17:56.569 "data_size": 63488 00:17:56.569 } 00:17:56.569 ] 00:17:56.569 }' 00:17:56.569 18:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.569 18:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:57.135 18:32:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:57.135 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:57.394 [2024-07-15 18:32:42.826452] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.394 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:57.394 "name": "Existed_Raid", 00:17:57.394 "aliases": [ 00:17:57.394 "d6ab8408-cf20-481a-87f1-6f80aa923cc1" 00:17:57.394 ], 00:17:57.394 "product_name": "Raid Volume", 00:17:57.394 "block_size": 512, 00:17:57.394 "num_blocks": 253952, 00:17:57.394 "uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:17:57.394 "assigned_rate_limits": { 00:17:57.394 "rw_ios_per_sec": 0, 00:17:57.394 "rw_mbytes_per_sec": 0, 00:17:57.394 "r_mbytes_per_sec": 0, 00:17:57.394 "w_mbytes_per_sec": 0 00:17:57.394 }, 00:17:57.394 "claimed": false, 00:17:57.394 "zoned": false, 00:17:57.394 "supported_io_types": { 00:17:57.394 "read": true, 00:17:57.394 "write": true, 00:17:57.394 "unmap": true, 00:17:57.394 "flush": true, 00:17:57.394 "reset": true, 00:17:57.394 "nvme_admin": false, 00:17:57.394 "nvme_io": false, 00:17:57.394 "nvme_io_md": false, 00:17:57.394 "write_zeroes": true, 00:17:57.394 "zcopy": false, 00:17:57.394 "get_zone_info": false, 00:17:57.394 "zone_management": false, 00:17:57.394 "zone_append": false, 00:17:57.394 "compare": false, 00:17:57.394 "compare_and_write": false, 00:17:57.394 "abort": false, 00:17:57.394 "seek_hole": false, 00:17:57.394 "seek_data": false, 00:17:57.394 "copy": false, 00:17:57.394 "nvme_iov_md": false 00:17:57.394 }, 00:17:57.394 
"memory_domains": [ 00:17:57.394 { 00:17:57.394 "dma_device_id": "system", 00:17:57.394 "dma_device_type": 1 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.394 "dma_device_type": 2 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "system", 00:17:57.394 "dma_device_type": 1 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.394 "dma_device_type": 2 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "system", 00:17:57.394 "dma_device_type": 1 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.394 "dma_device_type": 2 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "system", 00:17:57.394 "dma_device_type": 1 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.394 "dma_device_type": 2 00:17:57.394 } 00:17:57.394 ], 00:17:57.394 "driver_specific": { 00:17:57.394 "raid": { 00:17:57.394 "uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:17:57.394 "strip_size_kb": 64, 00:17:57.394 "state": "online", 00:17:57.394 "raid_level": "raid0", 00:17:57.394 "superblock": true, 00:17:57.394 "num_base_bdevs": 4, 00:17:57.394 "num_base_bdevs_discovered": 4, 00:17:57.394 "num_base_bdevs_operational": 4, 00:17:57.394 "base_bdevs_list": [ 00:17:57.394 { 00:17:57.394 "name": "BaseBdev1", 00:17:57.394 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:57.394 "is_configured": true, 00:17:57.394 "data_offset": 2048, 00:17:57.394 "data_size": 63488 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "name": "BaseBdev2", 00:17:57.394 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:17:57.394 "is_configured": true, 00:17:57.394 "data_offset": 2048, 00:17:57.394 "data_size": 63488 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "name": "BaseBdev3", 00:17:57.394 "uuid": "29c506a2-43a0-4efb-8d88-3d544828ead3", 00:17:57.394 "is_configured": true, 00:17:57.394 "data_offset": 2048, 00:17:57.394 
"data_size": 63488 00:17:57.394 }, 00:17:57.394 { 00:17:57.394 "name": "BaseBdev4", 00:17:57.394 "uuid": "47bb0413-910f-4d14-b62e-66340666d85b", 00:17:57.394 "is_configured": true, 00:17:57.394 "data_offset": 2048, 00:17:57.394 "data_size": 63488 00:17:57.394 } 00:17:57.394 ] 00:17:57.394 } 00:17:57.394 } 00:17:57.394 }' 00:17:57.394 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.394 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:57.394 BaseBdev2 00:17:57.394 BaseBdev3 00:17:57.394 BaseBdev4' 00:17:57.394 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:57.394 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:57.394 18:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.652 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:57.652 "name": "BaseBdev1", 00:17:57.652 "aliases": [ 00:17:57.652 "1ea8ab38-cad0-4845-af8d-15bf5089cb3a" 00:17:57.652 ], 00:17:57.652 "product_name": "Malloc disk", 00:17:57.652 "block_size": 512, 00:17:57.652 "num_blocks": 65536, 00:17:57.652 "uuid": "1ea8ab38-cad0-4845-af8d-15bf5089cb3a", 00:17:57.652 "assigned_rate_limits": { 00:17:57.652 "rw_ios_per_sec": 0, 00:17:57.652 "rw_mbytes_per_sec": 0, 00:17:57.652 "r_mbytes_per_sec": 0, 00:17:57.652 "w_mbytes_per_sec": 0 00:17:57.652 }, 00:17:57.652 "claimed": true, 00:17:57.652 "claim_type": "exclusive_write", 00:17:57.652 "zoned": false, 00:17:57.652 "supported_io_types": { 00:17:57.652 "read": true, 00:17:57.652 "write": true, 00:17:57.652 "unmap": true, 00:17:57.652 "flush": true, 00:17:57.652 "reset": true, 
00:17:57.652 "nvme_admin": false, 00:17:57.652 "nvme_io": false, 00:17:57.653 "nvme_io_md": false, 00:17:57.653 "write_zeroes": true, 00:17:57.653 "zcopy": true, 00:17:57.653 "get_zone_info": false, 00:17:57.653 "zone_management": false, 00:17:57.653 "zone_append": false, 00:17:57.653 "compare": false, 00:17:57.653 "compare_and_write": false, 00:17:57.653 "abort": true, 00:17:57.653 "seek_hole": false, 00:17:57.653 "seek_data": false, 00:17:57.653 "copy": true, 00:17:57.653 "nvme_iov_md": false 00:17:57.653 }, 00:17:57.653 "memory_domains": [ 00:17:57.653 { 00:17:57.653 "dma_device_id": "system", 00:17:57.653 "dma_device_type": 1 00:17:57.653 }, 00:17:57.653 { 00:17:57.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.653 "dma_device_type": 2 00:17:57.653 } 00:17:57.653 ], 00:17:57.653 "driver_specific": {} 00:17:57.653 }' 00:17:57.653 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.653 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.910 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.168 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:17:58.168 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.168 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.168 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:58.168 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.426 "name": "BaseBdev2", 00:17:58.426 "aliases": [ 00:17:58.426 "48eb7853-5e14-45d3-8d91-e0eaf3d7581c" 00:17:58.426 ], 00:17:58.426 "product_name": "Malloc disk", 00:17:58.426 "block_size": 512, 00:17:58.426 "num_blocks": 65536, 00:17:58.426 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:17:58.426 "assigned_rate_limits": { 00:17:58.426 "rw_ios_per_sec": 0, 00:17:58.426 "rw_mbytes_per_sec": 0, 00:17:58.426 "r_mbytes_per_sec": 0, 00:17:58.426 "w_mbytes_per_sec": 0 00:17:58.426 }, 00:17:58.426 "claimed": true, 00:17:58.426 "claim_type": "exclusive_write", 00:17:58.426 "zoned": false, 00:17:58.426 "supported_io_types": { 00:17:58.426 "read": true, 00:17:58.426 "write": true, 00:17:58.426 "unmap": true, 00:17:58.426 "flush": true, 00:17:58.426 "reset": true, 00:17:58.426 "nvme_admin": false, 00:17:58.426 "nvme_io": false, 00:17:58.426 "nvme_io_md": false, 00:17:58.426 "write_zeroes": true, 00:17:58.426 "zcopy": true, 00:17:58.426 "get_zone_info": false, 00:17:58.426 "zone_management": false, 00:17:58.426 "zone_append": false, 00:17:58.426 "compare": false, 00:17:58.426 "compare_and_write": false, 00:17:58.426 "abort": true, 00:17:58.426 "seek_hole": false, 00:17:58.426 "seek_data": false, 00:17:58.426 "copy": true, 00:17:58.426 "nvme_iov_md": false 00:17:58.426 }, 00:17:58.426 "memory_domains": [ 00:17:58.426 { 
00:17:58.426 "dma_device_id": "system", 00:17:58.426 "dma_device_type": 1 00:17:58.426 }, 00:17:58.426 { 00:17:58.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.426 "dma_device_type": 2 00:17:58.426 } 00:17:58.426 ], 00:17:58.426 "driver_specific": {} 00:17:58.426 }' 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:58.426 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.683 18:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.683 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:58.941 18:32:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.941 "name": "BaseBdev3", 00:17:58.941 "aliases": [ 00:17:58.941 "29c506a2-43a0-4efb-8d88-3d544828ead3" 00:17:58.941 ], 00:17:58.941 "product_name": "Malloc disk", 00:17:58.941 "block_size": 512, 00:17:58.941 "num_blocks": 65536, 00:17:58.941 "uuid": "29c506a2-43a0-4efb-8d88-3d544828ead3", 00:17:58.941 "assigned_rate_limits": { 00:17:58.941 "rw_ios_per_sec": 0, 00:17:58.941 "rw_mbytes_per_sec": 0, 00:17:58.941 "r_mbytes_per_sec": 0, 00:17:58.941 "w_mbytes_per_sec": 0 00:17:58.941 }, 00:17:58.941 "claimed": true, 00:17:58.941 "claim_type": "exclusive_write", 00:17:58.941 "zoned": false, 00:17:58.941 "supported_io_types": { 00:17:58.941 "read": true, 00:17:58.941 "write": true, 00:17:58.941 "unmap": true, 00:17:58.941 "flush": true, 00:17:58.941 "reset": true, 00:17:58.941 "nvme_admin": false, 00:17:58.941 "nvme_io": false, 00:17:58.941 "nvme_io_md": false, 00:17:58.941 "write_zeroes": true, 00:17:58.941 "zcopy": true, 00:17:58.941 "get_zone_info": false, 00:17:58.941 "zone_management": false, 00:17:58.941 "zone_append": false, 00:17:58.941 "compare": false, 00:17:58.941 "compare_and_write": false, 00:17:58.941 "abort": true, 00:17:58.941 "seek_hole": false, 00:17:58.941 "seek_data": false, 00:17:58.941 "copy": true, 00:17:58.941 "nvme_iov_md": false 00:17:58.941 }, 00:17:58.941 "memory_domains": [ 00:17:58.941 { 00:17:58.941 "dma_device_id": "system", 00:17:58.941 "dma_device_type": 1 00:17:58.941 }, 00:17:58.941 { 00:17:58.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.941 "dma_device_type": 2 00:17:58.941 } 00:17:58.941 ], 00:17:58.941 "driver_specific": {} 00:17:58.941 }' 00:17:58.941 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.941 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.941 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:17:58.941 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.199 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.456 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.456 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.456 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:59.456 18:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.713 "name": "BaseBdev4", 00:17:59.713 "aliases": [ 00:17:59.713 "47bb0413-910f-4d14-b62e-66340666d85b" 00:17:59.713 ], 00:17:59.713 "product_name": "Malloc disk", 00:17:59.713 "block_size": 512, 00:17:59.713 "num_blocks": 65536, 00:17:59.713 "uuid": "47bb0413-910f-4d14-b62e-66340666d85b", 00:17:59.713 "assigned_rate_limits": { 00:17:59.713 "rw_ios_per_sec": 0, 00:17:59.713 "rw_mbytes_per_sec": 0, 00:17:59.713 "r_mbytes_per_sec": 0, 00:17:59.713 "w_mbytes_per_sec": 0 
00:17:59.713 }, 00:17:59.713 "claimed": true, 00:17:59.713 "claim_type": "exclusive_write", 00:17:59.713 "zoned": false, 00:17:59.713 "supported_io_types": { 00:17:59.713 "read": true, 00:17:59.713 "write": true, 00:17:59.713 "unmap": true, 00:17:59.713 "flush": true, 00:17:59.713 "reset": true, 00:17:59.713 "nvme_admin": false, 00:17:59.713 "nvme_io": false, 00:17:59.713 "nvme_io_md": false, 00:17:59.713 "write_zeroes": true, 00:17:59.713 "zcopy": true, 00:17:59.713 "get_zone_info": false, 00:17:59.713 "zone_management": false, 00:17:59.713 "zone_append": false, 00:17:59.713 "compare": false, 00:17:59.713 "compare_and_write": false, 00:17:59.713 "abort": true, 00:17:59.713 "seek_hole": false, 00:17:59.713 "seek_data": false, 00:17:59.713 "copy": true, 00:17:59.713 "nvme_iov_md": false 00:17:59.713 }, 00:17:59.713 "memory_domains": [ 00:17:59.713 { 00:17:59.713 "dma_device_id": "system", 00:17:59.713 "dma_device_type": 1 00:17:59.713 }, 00:17:59.713 { 00:17:59.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.713 "dma_device_type": 2 00:17:59.713 } 00:17:59.713 ], 00:17:59.713 "driver_specific": {} 00:17:59.713 }' 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.713 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.971 
18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.971 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.971 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.971 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.971 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:00.228 [2024-07-15 18:32:45.629712] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.228 [2024-07-15 18:32:45.629735] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.228 [2024-07-15 18:32:45.629781] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.228 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:00.228 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:00.228 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:00.228 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.229 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.487 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.487 "name": "Existed_Raid", 00:18:00.487 "uuid": "d6ab8408-cf20-481a-87f1-6f80aa923cc1", 00:18:00.487 "strip_size_kb": 64, 00:18:00.487 "state": "offline", 00:18:00.487 "raid_level": "raid0", 00:18:00.487 "superblock": true, 00:18:00.487 "num_base_bdevs": 4, 00:18:00.487 "num_base_bdevs_discovered": 3, 00:18:00.487 "num_base_bdevs_operational": 3, 00:18:00.487 "base_bdevs_list": [ 00:18:00.487 { 00:18:00.487 "name": null, 00:18:00.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.487 "is_configured": false, 00:18:00.487 "data_offset": 2048, 00:18:00.487 "data_size": 63488 00:18:00.487 }, 00:18:00.487 { 00:18:00.487 "name": "BaseBdev2", 00:18:00.487 "uuid": "48eb7853-5e14-45d3-8d91-e0eaf3d7581c", 00:18:00.487 "is_configured": true, 00:18:00.487 "data_offset": 2048, 00:18:00.487 "data_size": 63488 00:18:00.487 }, 00:18:00.487 
{ 00:18:00.487 "name": "BaseBdev3", 00:18:00.487 "uuid": "29c506a2-43a0-4efb-8d88-3d544828ead3", 00:18:00.487 "is_configured": true, 00:18:00.487 "data_offset": 2048, 00:18:00.487 "data_size": 63488 00:18:00.487 }, 00:18:00.487 { 00:18:00.487 "name": "BaseBdev4", 00:18:00.487 "uuid": "47bb0413-910f-4d14-b62e-66340666d85b", 00:18:00.487 "is_configured": true, 00:18:00.487 "data_offset": 2048, 00:18:00.487 "data_size": 63488 00:18:00.487 } 00:18:00.487 ] 00:18:00.487 }' 00:18:00.487 18:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.487 18:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.054 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:01.054 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:01.054 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.054 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:01.313 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:01.313 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.313 18:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:01.572 [2024-07-15 18:32:47.026637] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.572 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:01.572 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:01.572 
18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.572 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:01.831 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:01.831 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.831 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:02.089 [2024-07-15 18:32:47.534425] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:02.089 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:02.089 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:02.089 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.089 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:02.347 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:02.347 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.347 18:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:02.606 [2024-07-15 18:32:48.046412] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:02.606 [2024-07-15 18:32:48.046451] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd4a490 name Existed_Raid, state offline 00:18:02.606 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:02.606 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:02.606 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.606 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.864 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:02.864 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:02.864 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:02.864 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:02.864 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:02.864 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:03.122 BaseBdev2 00:18:03.122 18:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:03.122 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:03.122 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.122 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:03.123 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:18:03.123 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.123 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.381 18:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.639 [ 00:18:03.639 { 00:18:03.639 "name": "BaseBdev2", 00:18:03.639 "aliases": [ 00:18:03.639 "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6" 00:18:03.639 ], 00:18:03.639 "product_name": "Malloc disk", 00:18:03.639 "block_size": 512, 00:18:03.639 "num_blocks": 65536, 00:18:03.639 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:03.639 "assigned_rate_limits": { 00:18:03.639 "rw_ios_per_sec": 0, 00:18:03.639 "rw_mbytes_per_sec": 0, 00:18:03.639 "r_mbytes_per_sec": 0, 00:18:03.639 "w_mbytes_per_sec": 0 00:18:03.639 }, 00:18:03.639 "claimed": false, 00:18:03.639 "zoned": false, 00:18:03.639 "supported_io_types": { 00:18:03.639 "read": true, 00:18:03.639 "write": true, 00:18:03.639 "unmap": true, 00:18:03.639 "flush": true, 00:18:03.639 "reset": true, 00:18:03.639 "nvme_admin": false, 00:18:03.639 "nvme_io": false, 00:18:03.639 "nvme_io_md": false, 00:18:03.639 "write_zeroes": true, 00:18:03.639 "zcopy": true, 00:18:03.639 "get_zone_info": false, 00:18:03.639 "zone_management": false, 00:18:03.639 "zone_append": false, 00:18:03.639 "compare": false, 00:18:03.639 "compare_and_write": false, 00:18:03.639 "abort": true, 00:18:03.639 "seek_hole": false, 00:18:03.639 "seek_data": false, 00:18:03.639 "copy": true, 00:18:03.639 "nvme_iov_md": false 00:18:03.639 }, 00:18:03.639 "memory_domains": [ 00:18:03.639 { 00:18:03.639 "dma_device_id": "system", 00:18:03.639 "dma_device_type": 1 00:18:03.639 }, 00:18:03.639 { 00:18:03.639 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.639 "dma_device_type": 2 00:18:03.639 } 00:18:03.639 ], 00:18:03.639 "driver_specific": {} 00:18:03.639 } 00:18:03.639 ] 00:18:03.639 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:03.639 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:03.639 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:03.639 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:03.899 BaseBdev3 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.899 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.160 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:04.444 [ 00:18:04.444 { 00:18:04.444 "name": "BaseBdev3", 00:18:04.444 "aliases": [ 00:18:04.444 "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0" 
00:18:04.444 ], 00:18:04.444 "product_name": "Malloc disk", 00:18:04.444 "block_size": 512, 00:18:04.444 "num_blocks": 65536, 00:18:04.444 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:04.444 "assigned_rate_limits": { 00:18:04.444 "rw_ios_per_sec": 0, 00:18:04.444 "rw_mbytes_per_sec": 0, 00:18:04.444 "r_mbytes_per_sec": 0, 00:18:04.444 "w_mbytes_per_sec": 0 00:18:04.444 }, 00:18:04.444 "claimed": false, 00:18:04.444 "zoned": false, 00:18:04.444 "supported_io_types": { 00:18:04.444 "read": true, 00:18:04.444 "write": true, 00:18:04.444 "unmap": true, 00:18:04.444 "flush": true, 00:18:04.444 "reset": true, 00:18:04.444 "nvme_admin": false, 00:18:04.444 "nvme_io": false, 00:18:04.444 "nvme_io_md": false, 00:18:04.444 "write_zeroes": true, 00:18:04.444 "zcopy": true, 00:18:04.444 "get_zone_info": false, 00:18:04.444 "zone_management": false, 00:18:04.444 "zone_append": false, 00:18:04.444 "compare": false, 00:18:04.444 "compare_and_write": false, 00:18:04.444 "abort": true, 00:18:04.444 "seek_hole": false, 00:18:04.444 "seek_data": false, 00:18:04.444 "copy": true, 00:18:04.444 "nvme_iov_md": false 00:18:04.444 }, 00:18:04.444 "memory_domains": [ 00:18:04.444 { 00:18:04.444 "dma_device_id": "system", 00:18:04.444 "dma_device_type": 1 00:18:04.444 }, 00:18:04.444 { 00:18:04.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.444 "dma_device_type": 2 00:18:04.444 } 00:18:04.444 ], 00:18:04.444 "driver_specific": {} 00:18:04.444 } 00:18:04.444 ] 00:18:04.444 18:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:04.444 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:04.444 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:04.444 18:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev4 00:18:04.713 BaseBdev4 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:04.713 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.972 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:05.230 [ 00:18:05.230 { 00:18:05.230 "name": "BaseBdev4", 00:18:05.230 "aliases": [ 00:18:05.230 "35ea9bee-8e72-42d8-b981-81eebbd2c0d6" 00:18:05.230 ], 00:18:05.230 "product_name": "Malloc disk", 00:18:05.230 "block_size": 512, 00:18:05.230 "num_blocks": 65536, 00:18:05.230 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:05.230 "assigned_rate_limits": { 00:18:05.230 "rw_ios_per_sec": 0, 00:18:05.230 "rw_mbytes_per_sec": 0, 00:18:05.230 "r_mbytes_per_sec": 0, 00:18:05.230 "w_mbytes_per_sec": 0 00:18:05.230 }, 00:18:05.230 "claimed": false, 00:18:05.230 "zoned": false, 00:18:05.230 "supported_io_types": { 00:18:05.230 "read": true, 00:18:05.230 "write": true, 00:18:05.230 "unmap": true, 00:18:05.230 "flush": true, 00:18:05.230 "reset": true, 00:18:05.230 "nvme_admin": false, 00:18:05.230 "nvme_io": false, 00:18:05.230 
"nvme_io_md": false, 00:18:05.230 "write_zeroes": true, 00:18:05.230 "zcopy": true, 00:18:05.230 "get_zone_info": false, 00:18:05.230 "zone_management": false, 00:18:05.230 "zone_append": false, 00:18:05.230 "compare": false, 00:18:05.230 "compare_and_write": false, 00:18:05.230 "abort": true, 00:18:05.230 "seek_hole": false, 00:18:05.230 "seek_data": false, 00:18:05.231 "copy": true, 00:18:05.231 "nvme_iov_md": false 00:18:05.231 }, 00:18:05.231 "memory_domains": [ 00:18:05.231 { 00:18:05.231 "dma_device_id": "system", 00:18:05.231 "dma_device_type": 1 00:18:05.231 }, 00:18:05.231 { 00:18:05.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.231 "dma_device_type": 2 00:18:05.231 } 00:18:05.231 ], 00:18:05.231 "driver_specific": {} 00:18:05.231 } 00:18:05.231 ] 00:18:05.231 18:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:05.231 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:05.231 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.231 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.489 [2024-07-15 18:32:50.866059] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.489 [2024-07-15 18:32:50.866097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.489 [2024-07-15 18:32:50.866116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.489 [2024-07-15 18:32:50.867519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.489 [2024-07-15 18:32:50.867562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 
is claimed 00:18:05.489 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:05.489 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.489 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.490 18:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.749 18:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.749 "name": "Existed_Raid", 00:18:05.749 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:05.749 "strip_size_kb": 64, 00:18:05.749 "state": "configuring", 00:18:05.749 "raid_level": "raid0", 00:18:05.749 "superblock": true, 00:18:05.749 "num_base_bdevs": 4, 00:18:05.749 "num_base_bdevs_discovered": 3, 00:18:05.749 
"num_base_bdevs_operational": 4, 00:18:05.749 "base_bdevs_list": [ 00:18:05.749 { 00:18:05.749 "name": "BaseBdev1", 00:18:05.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.749 "is_configured": false, 00:18:05.749 "data_offset": 0, 00:18:05.749 "data_size": 0 00:18:05.749 }, 00:18:05.749 { 00:18:05.749 "name": "BaseBdev2", 00:18:05.749 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:05.749 "is_configured": true, 00:18:05.749 "data_offset": 2048, 00:18:05.749 "data_size": 63488 00:18:05.749 }, 00:18:05.749 { 00:18:05.749 "name": "BaseBdev3", 00:18:05.749 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:05.749 "is_configured": true, 00:18:05.749 "data_offset": 2048, 00:18:05.749 "data_size": 63488 00:18:05.749 }, 00:18:05.749 { 00:18:05.749 "name": "BaseBdev4", 00:18:05.749 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:05.749 "is_configured": true, 00:18:05.749 "data_offset": 2048, 00:18:05.749 "data_size": 63488 00:18:05.749 } 00:18:05.749 ] 00:18:05.749 }' 00:18:05.749 18:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.749 18:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.316 18:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:06.575 [2024-07-15 18:32:51.993064] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.575 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.834 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.834 "name": "Existed_Raid", 00:18:06.834 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:06.834 "strip_size_kb": 64, 00:18:06.834 "state": "configuring", 00:18:06.834 "raid_level": "raid0", 00:18:06.834 "superblock": true, 00:18:06.834 "num_base_bdevs": 4, 00:18:06.834 "num_base_bdevs_discovered": 2, 00:18:06.834 "num_base_bdevs_operational": 4, 00:18:06.834 "base_bdevs_list": [ 00:18:06.834 { 00:18:06.834 "name": "BaseBdev1", 00:18:06.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.834 "is_configured": false, 00:18:06.834 "data_offset": 0, 00:18:06.834 "data_size": 0 00:18:06.834 }, 00:18:06.834 { 00:18:06.834 "name": null, 00:18:06.834 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:06.834 "is_configured": false, 00:18:06.834 "data_offset": 2048, 00:18:06.834 "data_size": 
63488 00:18:06.834 }, 00:18:06.834 { 00:18:06.834 "name": "BaseBdev3", 00:18:06.834 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:06.834 "is_configured": true, 00:18:06.834 "data_offset": 2048, 00:18:06.834 "data_size": 63488 00:18:06.834 }, 00:18:06.834 { 00:18:06.834 "name": "BaseBdev4", 00:18:06.834 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:06.834 "is_configured": true, 00:18:06.834 "data_offset": 2048, 00:18:06.834 "data_size": 63488 00:18:06.834 } 00:18:06.834 ] 00:18:06.834 }' 00:18:06.834 18:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.834 18:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.770 18:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.770 18:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:08.028 18:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:08.028 18:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:08.287 [2024-07-15 18:32:53.636813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.287 BaseBdev1 00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:08.287 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.545 18:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.804 [ 00:18:08.804 { 00:18:08.804 "name": "BaseBdev1", 00:18:08.804 "aliases": [ 00:18:08.804 "c030ff9a-75a9-4b02-ac34-e18270be0f55" 00:18:08.804 ], 00:18:08.804 "product_name": "Malloc disk", 00:18:08.804 "block_size": 512, 00:18:08.804 "num_blocks": 65536, 00:18:08.804 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:08.804 "assigned_rate_limits": { 00:18:08.804 "rw_ios_per_sec": 0, 00:18:08.804 "rw_mbytes_per_sec": 0, 00:18:08.804 "r_mbytes_per_sec": 0, 00:18:08.804 "w_mbytes_per_sec": 0 00:18:08.804 }, 00:18:08.804 "claimed": true, 00:18:08.804 "claim_type": "exclusive_write", 00:18:08.804 "zoned": false, 00:18:08.804 "supported_io_types": { 00:18:08.804 "read": true, 00:18:08.804 "write": true, 00:18:08.804 "unmap": true, 00:18:08.804 "flush": true, 00:18:08.804 "reset": true, 00:18:08.804 "nvme_admin": false, 00:18:08.804 "nvme_io": false, 00:18:08.804 "nvme_io_md": false, 00:18:08.804 "write_zeroes": true, 00:18:08.804 "zcopy": true, 00:18:08.804 "get_zone_info": false, 00:18:08.804 "zone_management": false, 00:18:08.804 "zone_append": false, 00:18:08.804 "compare": false, 00:18:08.804 "compare_and_write": false, 00:18:08.804 "abort": true, 00:18:08.804 "seek_hole": false, 00:18:08.804 "seek_data": false, 00:18:08.804 "copy": true, 00:18:08.804 "nvme_iov_md": false 00:18:08.804 }, 00:18:08.804 
"memory_domains": [ 00:18:08.804 { 00:18:08.804 "dma_device_id": "system", 00:18:08.804 "dma_device_type": 1 00:18:08.804 }, 00:18:08.804 { 00:18:08.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.804 "dma_device_type": 2 00:18:08.804 } 00:18:08.804 ], 00:18:08.805 "driver_specific": {} 00:18:08.805 } 00:18:08.805 ] 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.805 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.064 18:32:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.064 "name": "Existed_Raid", 00:18:09.064 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:09.064 "strip_size_kb": 64, 00:18:09.064 "state": "configuring", 00:18:09.064 "raid_level": "raid0", 00:18:09.064 "superblock": true, 00:18:09.064 "num_base_bdevs": 4, 00:18:09.064 "num_base_bdevs_discovered": 3, 00:18:09.064 "num_base_bdevs_operational": 4, 00:18:09.064 "base_bdevs_list": [ 00:18:09.064 { 00:18:09.064 "name": "BaseBdev1", 00:18:09.064 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:09.064 "is_configured": true, 00:18:09.064 "data_offset": 2048, 00:18:09.064 "data_size": 63488 00:18:09.064 }, 00:18:09.064 { 00:18:09.064 "name": null, 00:18:09.064 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:09.064 "is_configured": false, 00:18:09.064 "data_offset": 2048, 00:18:09.064 "data_size": 63488 00:18:09.064 }, 00:18:09.064 { 00:18:09.064 "name": "BaseBdev3", 00:18:09.064 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:09.064 "is_configured": true, 00:18:09.064 "data_offset": 2048, 00:18:09.064 "data_size": 63488 00:18:09.064 }, 00:18:09.064 { 00:18:09.064 "name": "BaseBdev4", 00:18:09.064 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:09.064 "is_configured": true, 00:18:09.064 "data_offset": 2048, 00:18:09.064 "data_size": 63488 00:18:09.064 } 00:18:09.064 ] 00:18:09.064 }' 00:18:09.064 18:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.064 18:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.631 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.631 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:09.890 18:32:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:09.890 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:10.150 [2024-07-15 18:32:55.521933] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.150 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.409 18:32:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.409 "name": "Existed_Raid", 00:18:10.409 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:10.409 "strip_size_kb": 64, 00:18:10.409 "state": "configuring", 00:18:10.409 "raid_level": "raid0", 00:18:10.409 "superblock": true, 00:18:10.409 "num_base_bdevs": 4, 00:18:10.409 "num_base_bdevs_discovered": 2, 00:18:10.409 "num_base_bdevs_operational": 4, 00:18:10.409 "base_bdevs_list": [ 00:18:10.409 { 00:18:10.409 "name": "BaseBdev1", 00:18:10.409 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:10.409 "is_configured": true, 00:18:10.409 "data_offset": 2048, 00:18:10.409 "data_size": 63488 00:18:10.409 }, 00:18:10.409 { 00:18:10.409 "name": null, 00:18:10.409 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:10.409 "is_configured": false, 00:18:10.409 "data_offset": 2048, 00:18:10.409 "data_size": 63488 00:18:10.409 }, 00:18:10.409 { 00:18:10.409 "name": null, 00:18:10.409 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:10.409 "is_configured": false, 00:18:10.409 "data_offset": 2048, 00:18:10.409 "data_size": 63488 00:18:10.409 }, 00:18:10.409 { 00:18:10.409 "name": "BaseBdev4", 00:18:10.409 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:10.409 "is_configured": true, 00:18:10.409 "data_offset": 2048, 00:18:10.409 "data_size": 63488 00:18:10.409 } 00:18:10.409 ] 00:18:10.409 }' 00:18:10.409 18:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.409 18:32:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.975 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.975 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.234 18:32:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:11.234 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:11.494 [2024-07-15 18:32:56.921700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.494 18:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.752 18:32:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.752 "name": "Existed_Raid", 00:18:11.752 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:11.752 "strip_size_kb": 64, 00:18:11.752 "state": "configuring", 00:18:11.752 "raid_level": "raid0", 00:18:11.752 "superblock": true, 00:18:11.752 "num_base_bdevs": 4, 00:18:11.752 "num_base_bdevs_discovered": 3, 00:18:11.752 "num_base_bdevs_operational": 4, 00:18:11.752 "base_bdevs_list": [ 00:18:11.752 { 00:18:11.752 "name": "BaseBdev1", 00:18:11.752 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:11.752 "is_configured": true, 00:18:11.752 "data_offset": 2048, 00:18:11.752 "data_size": 63488 00:18:11.752 }, 00:18:11.752 { 00:18:11.752 "name": null, 00:18:11.752 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:11.752 "is_configured": false, 00:18:11.752 "data_offset": 2048, 00:18:11.752 "data_size": 63488 00:18:11.752 }, 00:18:11.752 { 00:18:11.752 "name": "BaseBdev3", 00:18:11.752 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:11.752 "is_configured": true, 00:18:11.752 "data_offset": 2048, 00:18:11.752 "data_size": 63488 00:18:11.752 }, 00:18:11.752 { 00:18:11.752 "name": "BaseBdev4", 00:18:11.752 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:11.752 "is_configured": true, 00:18:11.752 "data_offset": 2048, 00:18:11.752 "data_size": 63488 00:18:11.752 } 00:18:11.752 ] 00:18:11.752 }' 00:18:11.752 18:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.752 18:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.320 18:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.320 18:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:12.579 18:32:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:12.579 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:12.839 [2024-07-15 18:32:58.309440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.839 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.098 18:32:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.098 "name": "Existed_Raid", 00:18:13.098 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:13.098 "strip_size_kb": 64, 00:18:13.098 "state": "configuring", 00:18:13.098 "raid_level": "raid0", 00:18:13.098 "superblock": true, 00:18:13.098 "num_base_bdevs": 4, 00:18:13.098 "num_base_bdevs_discovered": 2, 00:18:13.098 "num_base_bdevs_operational": 4, 00:18:13.098 "base_bdevs_list": [ 00:18:13.098 { 00:18:13.098 "name": null, 00:18:13.098 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:13.098 "is_configured": false, 00:18:13.098 "data_offset": 2048, 00:18:13.098 "data_size": 63488 00:18:13.098 }, 00:18:13.098 { 00:18:13.098 "name": null, 00:18:13.098 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:13.098 "is_configured": false, 00:18:13.098 "data_offset": 2048, 00:18:13.098 "data_size": 63488 00:18:13.098 }, 00:18:13.098 { 00:18:13.098 "name": "BaseBdev3", 00:18:13.098 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:13.098 "is_configured": true, 00:18:13.098 "data_offset": 2048, 00:18:13.098 "data_size": 63488 00:18:13.098 }, 00:18:13.098 { 00:18:13.098 "name": "BaseBdev4", 00:18:13.098 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:13.098 "is_configured": true, 00:18:13.098 "data_offset": 2048, 00:18:13.098 "data_size": 63488 00:18:13.098 } 00:18:13.098 ] 00:18:13.098 }' 00:18:13.098 18:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.098 18:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.035 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:14.035 18:32:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:14.035 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:14.294 [2024-07-15 18:32:59.747935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.294 18:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.553 18:33:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.553 "name": "Existed_Raid", 00:18:14.554 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:14.554 "strip_size_kb": 64, 00:18:14.554 "state": "configuring", 00:18:14.554 "raid_level": "raid0", 00:18:14.554 "superblock": true, 00:18:14.554 "num_base_bdevs": 4, 00:18:14.554 "num_base_bdevs_discovered": 3, 00:18:14.554 "num_base_bdevs_operational": 4, 00:18:14.554 "base_bdevs_list": [ 00:18:14.554 { 00:18:14.554 "name": null, 00:18:14.554 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:14.554 "is_configured": false, 00:18:14.554 "data_offset": 2048, 00:18:14.554 "data_size": 63488 00:18:14.554 }, 00:18:14.554 { 00:18:14.554 "name": "BaseBdev2", 00:18:14.554 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:14.554 "is_configured": true, 00:18:14.554 "data_offset": 2048, 00:18:14.554 "data_size": 63488 00:18:14.554 }, 00:18:14.554 { 00:18:14.554 "name": "BaseBdev3", 00:18:14.554 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:14.554 "is_configured": true, 00:18:14.554 "data_offset": 2048, 00:18:14.554 "data_size": 63488 00:18:14.554 }, 00:18:14.554 { 00:18:14.554 "name": "BaseBdev4", 00:18:14.554 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:14.554 "is_configured": true, 00:18:14.554 "data_offset": 2048, 00:18:14.554 "data_size": 63488 00:18:14.554 } 00:18:14.554 ] 00:18:14.554 }' 00:18:14.554 18:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.554 18:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.491 18:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.491 18:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:15.491 18:33:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:15.491 18:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.491 18:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:15.749 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c030ff9a-75a9-4b02-ac34-e18270be0f55 00:18:16.009 [2024-07-15 18:33:01.447718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:16.009 [2024-07-15 18:33:01.447871] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd40de0 00:18:16.009 [2024-07-15 18:33:01.447884] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:16.009 [2024-07-15 18:33:01.448084] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd418e0 00:18:16.009 [2024-07-15 18:33:01.448207] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd40de0 00:18:16.009 [2024-07-15 18:33:01.448215] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd40de0 00:18:16.009 [2024-07-15 18:33:01.448308] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.009 NewBaseBdev 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 
-- # local i 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:16.009 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.268 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:16.526 [ 00:18:16.526 { 00:18:16.526 "name": "NewBaseBdev", 00:18:16.526 "aliases": [ 00:18:16.526 "c030ff9a-75a9-4b02-ac34-e18270be0f55" 00:18:16.526 ], 00:18:16.526 "product_name": "Malloc disk", 00:18:16.526 "block_size": 512, 00:18:16.526 "num_blocks": 65536, 00:18:16.526 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:16.526 "assigned_rate_limits": { 00:18:16.526 "rw_ios_per_sec": 0, 00:18:16.526 "rw_mbytes_per_sec": 0, 00:18:16.526 "r_mbytes_per_sec": 0, 00:18:16.526 "w_mbytes_per_sec": 0 00:18:16.526 }, 00:18:16.526 "claimed": true, 00:18:16.526 "claim_type": "exclusive_write", 00:18:16.526 "zoned": false, 00:18:16.526 "supported_io_types": { 00:18:16.526 "read": true, 00:18:16.526 "write": true, 00:18:16.526 "unmap": true, 00:18:16.526 "flush": true, 00:18:16.526 "reset": true, 00:18:16.526 "nvme_admin": false, 00:18:16.526 "nvme_io": false, 00:18:16.526 "nvme_io_md": false, 00:18:16.527 "write_zeroes": true, 00:18:16.527 "zcopy": true, 00:18:16.527 "get_zone_info": false, 00:18:16.527 "zone_management": false, 00:18:16.527 "zone_append": false, 00:18:16.527 "compare": false, 00:18:16.527 "compare_and_write": false, 00:18:16.527 "abort": true, 00:18:16.527 "seek_hole": false, 00:18:16.527 "seek_data": false, 00:18:16.527 "copy": true, 00:18:16.527 "nvme_iov_md": false 00:18:16.527 }, 
00:18:16.527 "memory_domains": [ 00:18:16.527 { 00:18:16.527 "dma_device_id": "system", 00:18:16.527 "dma_device_type": 1 00:18:16.527 }, 00:18:16.527 { 00:18:16.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.527 "dma_device_type": 2 00:18:16.527 } 00:18:16.527 ], 00:18:16.527 "driver_specific": {} 00:18:16.527 } 00:18:16.527 ] 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.527 18:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.786 18:33:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.786 "name": "Existed_Raid", 00:18:16.786 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:16.786 "strip_size_kb": 64, 00:18:16.786 "state": "online", 00:18:16.786 "raid_level": "raid0", 00:18:16.786 "superblock": true, 00:18:16.786 "num_base_bdevs": 4, 00:18:16.786 "num_base_bdevs_discovered": 4, 00:18:16.786 "num_base_bdevs_operational": 4, 00:18:16.786 "base_bdevs_list": [ 00:18:16.786 { 00:18:16.786 "name": "NewBaseBdev", 00:18:16.786 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 2048, 00:18:16.786 "data_size": 63488 00:18:16.786 }, 00:18:16.786 { 00:18:16.786 "name": "BaseBdev2", 00:18:16.786 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 2048, 00:18:16.786 "data_size": 63488 00:18:16.786 }, 00:18:16.786 { 00:18:16.786 "name": "BaseBdev3", 00:18:16.786 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 2048, 00:18:16.786 "data_size": 63488 00:18:16.786 }, 00:18:16.786 { 00:18:16.786 "name": "BaseBdev4", 00:18:16.786 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 2048, 00:18:16.786 "data_size": 63488 00:18:16.786 } 00:18:16.786 ] 00:18:16.786 }' 00:18:16.786 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.786 18:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:17.354 18:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:17.612 [2024-07-15 18:33:03.056419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.612 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:17.612 "name": "Existed_Raid", 00:18:17.612 "aliases": [ 00:18:17.612 "46455f96-3212-41c6-97a8-cd9f683c9605" 00:18:17.612 ], 00:18:17.612 "product_name": "Raid Volume", 00:18:17.612 "block_size": 512, 00:18:17.612 "num_blocks": 253952, 00:18:17.612 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:17.612 "assigned_rate_limits": { 00:18:17.612 "rw_ios_per_sec": 0, 00:18:17.612 "rw_mbytes_per_sec": 0, 00:18:17.612 "r_mbytes_per_sec": 0, 00:18:17.612 "w_mbytes_per_sec": 0 00:18:17.612 }, 00:18:17.612 "claimed": false, 00:18:17.612 "zoned": false, 00:18:17.612 "supported_io_types": { 00:18:17.612 "read": true, 00:18:17.612 "write": true, 00:18:17.612 "unmap": true, 00:18:17.612 "flush": true, 00:18:17.612 "reset": true, 00:18:17.612 "nvme_admin": false, 00:18:17.612 "nvme_io": false, 00:18:17.612 "nvme_io_md": false, 00:18:17.612 "write_zeroes": true, 00:18:17.612 "zcopy": false, 00:18:17.612 "get_zone_info": false, 00:18:17.612 "zone_management": false, 00:18:17.612 "zone_append": false, 00:18:17.612 "compare": false, 00:18:17.612 "compare_and_write": false, 00:18:17.612 "abort": false, 00:18:17.612 "seek_hole": 
false, 00:18:17.612 "seek_data": false, 00:18:17.612 "copy": false, 00:18:17.612 "nvme_iov_md": false 00:18:17.612 }, 00:18:17.612 "memory_domains": [ 00:18:17.612 { 00:18:17.612 "dma_device_id": "system", 00:18:17.612 "dma_device_type": 1 00:18:17.612 }, 00:18:17.612 { 00:18:17.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.612 "dma_device_type": 2 00:18:17.612 }, 00:18:17.612 { 00:18:17.612 "dma_device_id": "system", 00:18:17.612 "dma_device_type": 1 00:18:17.612 }, 00:18:17.612 { 00:18:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.613 "dma_device_type": 2 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "dma_device_id": "system", 00:18:17.613 "dma_device_type": 1 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.613 "dma_device_type": 2 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "dma_device_id": "system", 00:18:17.613 "dma_device_type": 1 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.613 "dma_device_type": 2 00:18:17.613 } 00:18:17.613 ], 00:18:17.613 "driver_specific": { 00:18:17.613 "raid": { 00:18:17.613 "uuid": "46455f96-3212-41c6-97a8-cd9f683c9605", 00:18:17.613 "strip_size_kb": 64, 00:18:17.613 "state": "online", 00:18:17.613 "raid_level": "raid0", 00:18:17.613 "superblock": true, 00:18:17.613 "num_base_bdevs": 4, 00:18:17.613 "num_base_bdevs_discovered": 4, 00:18:17.613 "num_base_bdevs_operational": 4, 00:18:17.613 "base_bdevs_list": [ 00:18:17.613 { 00:18:17.613 "name": "NewBaseBdev", 00:18:17.613 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:17.613 "is_configured": true, 00:18:17.613 "data_offset": 2048, 00:18:17.613 "data_size": 63488 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "name": "BaseBdev2", 00:18:17.613 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:17.613 "is_configured": true, 00:18:17.613 "data_offset": 2048, 00:18:17.613 "data_size": 63488 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "name": "BaseBdev3", 00:18:17.613 
"uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:17.613 "is_configured": true, 00:18:17.613 "data_offset": 2048, 00:18:17.613 "data_size": 63488 00:18:17.613 }, 00:18:17.613 { 00:18:17.613 "name": "BaseBdev4", 00:18:17.613 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:17.613 "is_configured": true, 00:18:17.613 "data_offset": 2048, 00:18:17.613 "data_size": 63488 00:18:17.613 } 00:18:17.613 ] 00:18:17.613 } 00:18:17.613 } 00:18:17.613 }' 00:18:17.613 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.613 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:17.613 BaseBdev2 00:18:17.613 BaseBdev3 00:18:17.613 BaseBdev4' 00:18:17.613 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:17.613 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:17.613 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:17.870 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:17.870 "name": "NewBaseBdev", 00:18:17.870 "aliases": [ 00:18:17.870 "c030ff9a-75a9-4b02-ac34-e18270be0f55" 00:18:17.870 ], 00:18:17.870 "product_name": "Malloc disk", 00:18:17.870 "block_size": 512, 00:18:17.870 "num_blocks": 65536, 00:18:17.870 "uuid": "c030ff9a-75a9-4b02-ac34-e18270be0f55", 00:18:17.870 "assigned_rate_limits": { 00:18:17.870 "rw_ios_per_sec": 0, 00:18:17.870 "rw_mbytes_per_sec": 0, 00:18:17.870 "r_mbytes_per_sec": 0, 00:18:17.870 "w_mbytes_per_sec": 0 00:18:17.870 }, 00:18:17.870 "claimed": true, 00:18:17.870 "claim_type": "exclusive_write", 00:18:17.870 "zoned": false, 00:18:17.870 "supported_io_types": { 
00:18:17.870 "read": true, 00:18:17.870 "write": true, 00:18:17.870 "unmap": true, 00:18:17.870 "flush": true, 00:18:17.870 "reset": true, 00:18:17.870 "nvme_admin": false, 00:18:17.870 "nvme_io": false, 00:18:17.870 "nvme_io_md": false, 00:18:17.870 "write_zeroes": true, 00:18:17.870 "zcopy": true, 00:18:17.870 "get_zone_info": false, 00:18:17.870 "zone_management": false, 00:18:17.870 "zone_append": false, 00:18:17.870 "compare": false, 00:18:17.870 "compare_and_write": false, 00:18:17.870 "abort": true, 00:18:17.870 "seek_hole": false, 00:18:17.870 "seek_data": false, 00:18:17.870 "copy": true, 00:18:17.870 "nvme_iov_md": false 00:18:17.870 }, 00:18:17.871 "memory_domains": [ 00:18:17.871 { 00:18:17.871 "dma_device_id": "system", 00:18:17.871 "dma_device_type": 1 00:18:17.871 }, 00:18:17.871 { 00:18:17.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.871 "dma_device_type": 2 00:18:17.871 } 00:18:17.871 ], 00:18:17.871 "driver_specific": {} 00:18:17.871 }' 00:18:17.871 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:18.128 18:33:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.385 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.385 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:18.385 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.385 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.385 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:18.643 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.643 "name": "BaseBdev2", 00:18:18.643 "aliases": [ 00:18:18.643 "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6" 00:18:18.643 ], 00:18:18.643 "product_name": "Malloc disk", 00:18:18.643 "block_size": 512, 00:18:18.643 "num_blocks": 65536, 00:18:18.643 "uuid": "45eb7d2b-e0d2-4de9-9f1d-87c22e215db6", 00:18:18.643 "assigned_rate_limits": { 00:18:18.643 "rw_ios_per_sec": 0, 00:18:18.643 "rw_mbytes_per_sec": 0, 00:18:18.643 "r_mbytes_per_sec": 0, 00:18:18.643 "w_mbytes_per_sec": 0 00:18:18.643 }, 00:18:18.643 "claimed": true, 00:18:18.643 "claim_type": "exclusive_write", 00:18:18.643 "zoned": false, 00:18:18.643 "supported_io_types": { 00:18:18.643 "read": true, 00:18:18.643 "write": true, 00:18:18.643 "unmap": true, 00:18:18.643 "flush": true, 00:18:18.643 "reset": true, 00:18:18.643 "nvme_admin": false, 00:18:18.643 "nvme_io": false, 00:18:18.643 "nvme_io_md": false, 00:18:18.643 "write_zeroes": true, 00:18:18.643 "zcopy": true, 00:18:18.643 "get_zone_info": false, 00:18:18.643 "zone_management": false, 00:18:18.643 "zone_append": false, 00:18:18.643 "compare": false, 00:18:18.643 "compare_and_write": false, 00:18:18.643 "abort": true, 00:18:18.643 "seek_hole": false, 00:18:18.643 "seek_data": 
false, 00:18:18.643 "copy": true, 00:18:18.643 "nvme_iov_md": false 00:18:18.643 }, 00:18:18.643 "memory_domains": [ 00:18:18.643 { 00:18:18.643 "dma_device_id": "system", 00:18:18.643 "dma_device_type": 1 00:18:18.643 }, 00:18:18.643 { 00:18:18.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.643 "dma_device_type": 2 00:18:18.643 } 00:18:18.643 ], 00:18:18.643 "driver_specific": {} 00:18:18.644 }' 00:18:18.644 18:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.644 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.644 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:18.644 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.644 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.644 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:18.644 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:18:18.902 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.160 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.160 "name": "BaseBdev3", 00:18:19.160 "aliases": [ 00:18:19.160 "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0" 00:18:19.160 ], 00:18:19.160 "product_name": "Malloc disk", 00:18:19.160 "block_size": 512, 00:18:19.160 "num_blocks": 65536, 00:18:19.160 "uuid": "8e69bbb8-67f2-4960-a7e8-0fa4656b34e0", 00:18:19.160 "assigned_rate_limits": { 00:18:19.160 "rw_ios_per_sec": 0, 00:18:19.160 "rw_mbytes_per_sec": 0, 00:18:19.160 "r_mbytes_per_sec": 0, 00:18:19.160 "w_mbytes_per_sec": 0 00:18:19.160 }, 00:18:19.160 "claimed": true, 00:18:19.160 "claim_type": "exclusive_write", 00:18:19.160 "zoned": false, 00:18:19.160 "supported_io_types": { 00:18:19.160 "read": true, 00:18:19.160 "write": true, 00:18:19.160 "unmap": true, 00:18:19.160 "flush": true, 00:18:19.160 "reset": true, 00:18:19.160 "nvme_admin": false, 00:18:19.160 "nvme_io": false, 00:18:19.160 "nvme_io_md": false, 00:18:19.160 "write_zeroes": true, 00:18:19.160 "zcopy": true, 00:18:19.160 "get_zone_info": false, 00:18:19.160 "zone_management": false, 00:18:19.160 "zone_append": false, 00:18:19.160 "compare": false, 00:18:19.160 "compare_and_write": false, 00:18:19.160 "abort": true, 00:18:19.160 "seek_hole": false, 00:18:19.160 "seek_data": false, 00:18:19.160 "copy": true, 00:18:19.160 "nvme_iov_md": false 00:18:19.160 }, 00:18:19.160 "memory_domains": [ 00:18:19.160 { 00:18:19.160 "dma_device_id": "system", 00:18:19.160 "dma_device_type": 1 00:18:19.160 }, 00:18:19.160 { 00:18:19.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.160 "dma_device_type": 2 00:18:19.160 } 00:18:19.160 ], 00:18:19.160 "driver_specific": {} 00:18:19.160 }' 00:18:19.160 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.160 18:33:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.418 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.676 18:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.676 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.676 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.676 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:19.676 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.934 "name": "BaseBdev4", 00:18:19.934 "aliases": [ 00:18:19.934 "35ea9bee-8e72-42d8-b981-81eebbd2c0d6" 00:18:19.934 ], 00:18:19.934 "product_name": "Malloc disk", 00:18:19.934 "block_size": 512, 00:18:19.934 "num_blocks": 65536, 00:18:19.934 "uuid": "35ea9bee-8e72-42d8-b981-81eebbd2c0d6", 00:18:19.934 "assigned_rate_limits": { 00:18:19.934 
"rw_ios_per_sec": 0, 00:18:19.934 "rw_mbytes_per_sec": 0, 00:18:19.934 "r_mbytes_per_sec": 0, 00:18:19.934 "w_mbytes_per_sec": 0 00:18:19.934 }, 00:18:19.934 "claimed": true, 00:18:19.934 "claim_type": "exclusive_write", 00:18:19.934 "zoned": false, 00:18:19.934 "supported_io_types": { 00:18:19.934 "read": true, 00:18:19.934 "write": true, 00:18:19.934 "unmap": true, 00:18:19.934 "flush": true, 00:18:19.934 "reset": true, 00:18:19.934 "nvme_admin": false, 00:18:19.934 "nvme_io": false, 00:18:19.934 "nvme_io_md": false, 00:18:19.934 "write_zeroes": true, 00:18:19.934 "zcopy": true, 00:18:19.934 "get_zone_info": false, 00:18:19.934 "zone_management": false, 00:18:19.934 "zone_append": false, 00:18:19.934 "compare": false, 00:18:19.934 "compare_and_write": false, 00:18:19.934 "abort": true, 00:18:19.934 "seek_hole": false, 00:18:19.934 "seek_data": false, 00:18:19.934 "copy": true, 00:18:19.934 "nvme_iov_md": false 00:18:19.934 }, 00:18:19.934 "memory_domains": [ 00:18:19.934 { 00:18:19.934 "dma_device_id": "system", 00:18:19.934 "dma_device_type": 1 00:18:19.934 }, 00:18:19.934 { 00:18:19.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.934 "dma_device_type": 2 00:18:19.934 } 00:18:19.934 ], 00:18:19.934 "driver_specific": {} 00:18:19.934 }' 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.934 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:18:20.191 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.191 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.191 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.191 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.191 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.191 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.451 [2024-07-15 18:33:05.875703] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.451 [2024-07-15 18:33:05.875727] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.451 [2024-07-15 18:33:05.875775] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.451 [2024-07-15 18:33:05.875834] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.451 [2024-07-15 18:33:05.875843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd40de0 name Existed_Raid, state offline 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2837871 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2837871 ']' 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2837871 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
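The repeated `jq .block_size` / `.md_size` / `.md_interleave` / `.dif_type` checks above pull single fields out of the JSON record that `bdev_get_bdevs` returned for each base bdev; `jq` prints `null` for absent keys, which is why the log shows `[[ null == null ]]` comparisons. A minimal Python sketch of the same checks, run against a trimmed copy of the BaseBdev4 record from this log (illustration only, not part of the test suite):

```python
import json

# Trimmed copy of the BaseBdev4 record printed by `bdev_get_bdevs` above.
base_bdev_info = json.loads("""
{
  "name": "BaseBdev4",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write"
}
""")

# The shell test compares `jq` output with `[[ ... == ... ]]`; in Python,
# a missing key maps to None where jq would print `null`.
assert base_bdev_info["block_size"] == 512
assert base_bdev_info.get("md_size") is None
assert base_bdev_info.get("md_interleave") is None
assert base_bdev_info.get("dif_type") is None
```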
00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2837871 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2837871' 00:18:20.451 killing process with pid 2837871 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2837871 00:18:20.451 [2024-07-15 18:33:05.939257] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.451 18:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2837871 00:18:20.451 [2024-07-15 18:33:05.975101] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.713 18:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:20.713 00:18:20.713 real 0m33.789s 00:18:20.713 user 1m3.464s 00:18:20.713 sys 0m4.661s 00:18:20.713 18:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.713 18:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.713 ************************************ 00:18:20.713 END TEST raid_state_function_test_sb 00:18:20.713 ************************************ 00:18:20.713 18:33:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:20.713 18:33:06 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:20.713 18:33:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:20.713 18:33:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.713 18:33:06 bdev_raid -- common/autotest_common.sh@10 -- # set 
+x 00:18:20.713 ************************************ 00:18:20.713 START TEST raid_superblock_test 00:18:20.713 ************************************ 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:20.713 18:33:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2843968 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2843968 /var/tmp/spdk-raid.sock 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2843968 ']' 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:20.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.713 18:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.972 [2024-07-15 18:33:06.274182] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:18:20.972 [2024-07-15 18:33:06.274242] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843968 ] 00:18:20.972 [2024-07-15 18:33:06.372263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.972 [2024-07-15 18:33:06.467042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.230 [2024-07-15 18:33:06.528051] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.230 [2024-07-15 18:33:06.528082] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.796 18:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.796 18:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:21.796 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.797 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:22.055 malloc1 00:18:22.055 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.314 [2024-07-15 18:33:07.717309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.314 [2024-07-15 18:33:07.717355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.314 [2024-07-15 18:33:07.717372] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1928e20 00:18:22.314 [2024-07-15 18:33:07.717381] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.314 [2024-07-15 18:33:07.719126] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.314 [2024-07-15 18:33:07.719154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.314 pt1 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.314 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.314 18:33:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:22.573 malloc2 00:18:22.573 18:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.833 [2024-07-15 18:33:08.231489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.833 [2024-07-15 18:33:08.231534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.833 [2024-07-15 18:33:08.231548] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad2ed0 00:18:22.833 [2024-07-15 18:33:08.231558] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.833 [2024-07-15 18:33:08.233153] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.833 [2024-07-15 18:33:08.233182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.833 pt2 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.833 18:33:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.833 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:23.092 malloc3 00:18:23.092 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:23.351 [2024-07-15 18:33:08.733376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:23.351 [2024-07-15 18:33:08.733419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.351 [2024-07-15 18:33:08.733435] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad6a30 00:18:23.351 [2024-07-15 18:33:08.733444] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.351 [2024-07-15 18:33:08.735019] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.351 [2024-07-15 18:33:08.735046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:23.351 pt3 00:18:23.351 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:23.351 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:23.351 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:18:23.352 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:23.352 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:23.352 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:23.352 
18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:23.352 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:23.352 18:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:23.610 malloc4 00:18:23.610 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:23.869 [2024-07-15 18:33:09.251196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:23.869 [2024-07-15 18:33:09.251242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.869 [2024-07-15 18:33:09.251257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad3900 00:18:23.869 [2024-07-15 18:33:09.251266] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.869 [2024-07-15 18:33:09.252882] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.869 [2024-07-15 18:33:09.252908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:23.869 pt4 00:18:23.869 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:23.869 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:23.869 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:24.128 [2024-07-15 18:33:09.503883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
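The `bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s` call above claims the four passthru bdevs and reports `blockcnt 253952, blocklen 512`. That count follows from the numbers already in this log: each malloc base bdev has 65536 blocks, the on-disk superblock (`-s`) reserves 2048 blocks per base bdev (`data_offset`), and raid0 capacity is the per-bdev data size times the member count. A quick check of that arithmetic:

```python
# Figures taken from the bdev dumps in this log.
base_num_blocks = 65536   # per malloc base bdev, 512 B blocks
data_offset = 2048        # blocks reserved for the raid superblock (-s)
num_base_bdevs = 4

data_size = base_num_blocks - data_offset        # usable blocks per member
raid0_blockcnt = data_size * num_base_bdevs      # raid0 stripes all members

assert data_size == 63488          # matches data_size in base_bdevs_list
assert raid0_blockcnt == 253952    # matches "blockcnt 253952" above
```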
00:18:24.128 [2024-07-15 18:33:09.505259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.128 [2024-07-15 18:33:09.505320] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:24.128 [2024-07-15 18:33:09.505367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:24.128 [2024-07-15 18:33:09.505540] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ad6d40 00:18:24.128 [2024-07-15 18:33:09.505550] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:24.128 [2024-07-15 18:33:09.505756] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1adb140 00:18:24.128 [2024-07-15 18:33:09.505908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ad6d40 00:18:24.128 [2024-07-15 18:33:09.505917] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ad6d40 00:18:24.128 [2024-07-15 18:33:09.506037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.128 18:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.696 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.696 "name": "raid_bdev1", 00:18:24.696 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:24.696 "strip_size_kb": 64, 00:18:24.696 "state": "online", 00:18:24.696 "raid_level": "raid0", 00:18:24.696 "superblock": true, 00:18:24.696 "num_base_bdevs": 4, 00:18:24.696 "num_base_bdevs_discovered": 4, 00:18:24.696 "num_base_bdevs_operational": 4, 00:18:24.696 "base_bdevs_list": [ 00:18:24.696 { 00:18:24.696 "name": "pt1", 00:18:24.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.696 "is_configured": true, 00:18:24.696 "data_offset": 2048, 00:18:24.696 "data_size": 63488 00:18:24.696 }, 00:18:24.696 { 00:18:24.696 "name": "pt2", 00:18:24.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.696 "is_configured": true, 00:18:24.696 "data_offset": 2048, 00:18:24.696 "data_size": 63488 00:18:24.696 }, 00:18:24.696 { 00:18:24.696 "name": "pt3", 00:18:24.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:24.696 "is_configured": true, 00:18:24.696 "data_offset": 2048, 00:18:24.696 "data_size": 63488 00:18:24.696 }, 00:18:24.696 { 00:18:24.696 "name": "pt4", 00:18:24.696 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:24.696 "is_configured": true, 00:18:24.696 "data_offset": 2048, 00:18:24.696 "data_size": 63488 00:18:24.696 } 00:18:24.696 ] 00:18:24.696 }' 00:18:24.696 18:33:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.696 18:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:25.263 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:25.522 [2024-07-15 18:33:10.867847] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.522 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:25.522 "name": "raid_bdev1", 00:18:25.522 "aliases": [ 00:18:25.522 "d84a96f4-71d2-4132-99d1-100691eca27d" 00:18:25.522 ], 00:18:25.522 "product_name": "Raid Volume", 00:18:25.522 "block_size": 512, 00:18:25.522 "num_blocks": 253952, 00:18:25.522 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:25.522 "assigned_rate_limits": { 00:18:25.522 "rw_ios_per_sec": 0, 00:18:25.522 "rw_mbytes_per_sec": 0, 00:18:25.522 "r_mbytes_per_sec": 0, 00:18:25.522 "w_mbytes_per_sec": 0 00:18:25.522 }, 00:18:25.522 "claimed": false, 00:18:25.522 "zoned": false, 00:18:25.522 "supported_io_types": { 00:18:25.522 "read": true, 00:18:25.522 "write": true, 00:18:25.522 
"unmap": true, 00:18:25.522 "flush": true, 00:18:25.522 "reset": true, 00:18:25.522 "nvme_admin": false, 00:18:25.522 "nvme_io": false, 00:18:25.522 "nvme_io_md": false, 00:18:25.522 "write_zeroes": true, 00:18:25.522 "zcopy": false, 00:18:25.522 "get_zone_info": false, 00:18:25.522 "zone_management": false, 00:18:25.522 "zone_append": false, 00:18:25.522 "compare": false, 00:18:25.522 "compare_and_write": false, 00:18:25.522 "abort": false, 00:18:25.522 "seek_hole": false, 00:18:25.522 "seek_data": false, 00:18:25.522 "copy": false, 00:18:25.522 "nvme_iov_md": false 00:18:25.522 }, 00:18:25.522 "memory_domains": [ 00:18:25.522 { 00:18:25.522 "dma_device_id": "system", 00:18:25.522 "dma_device_type": 1 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.522 "dma_device_type": 2 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "system", 00:18:25.522 "dma_device_type": 1 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.522 "dma_device_type": 2 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "system", 00:18:25.522 "dma_device_type": 1 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.522 "dma_device_type": 2 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "system", 00:18:25.522 "dma_device_type": 1 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.522 "dma_device_type": 2 00:18:25.522 } 00:18:25.522 ], 00:18:25.522 "driver_specific": { 00:18:25.522 "raid": { 00:18:25.522 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:25.522 "strip_size_kb": 64, 00:18:25.522 "state": "online", 00:18:25.522 "raid_level": "raid0", 00:18:25.522 "superblock": true, 00:18:25.522 "num_base_bdevs": 4, 00:18:25.522 "num_base_bdevs_discovered": 4, 00:18:25.522 "num_base_bdevs_operational": 4, 00:18:25.522 "base_bdevs_list": [ 00:18:25.522 { 00:18:25.522 "name": "pt1", 
00:18:25.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.522 "is_configured": true, 00:18:25.522 "data_offset": 2048, 00:18:25.522 "data_size": 63488 00:18:25.522 }, 00:18:25.522 { 00:18:25.522 "name": "pt2", 00:18:25.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.523 "is_configured": true, 00:18:25.523 "data_offset": 2048, 00:18:25.523 "data_size": 63488 00:18:25.523 }, 00:18:25.523 { 00:18:25.523 "name": "pt3", 00:18:25.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:25.523 "is_configured": true, 00:18:25.523 "data_offset": 2048, 00:18:25.523 "data_size": 63488 00:18:25.523 }, 00:18:25.523 { 00:18:25.523 "name": "pt4", 00:18:25.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:25.523 "is_configured": true, 00:18:25.523 "data_offset": 2048, 00:18:25.523 "data_size": 63488 00:18:25.523 } 00:18:25.523 ] 00:18:25.523 } 00:18:25.523 } 00:18:25.523 }' 00:18:25.523 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:25.523 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:25.523 pt2 00:18:25.523 pt3 00:18:25.523 pt4' 00:18:25.523 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:25.523 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:25.523 18:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:25.782 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:25.782 "name": "pt1", 00:18:25.782 "aliases": [ 00:18:25.782 "00000000-0000-0000-0000-000000000001" 00:18:25.782 ], 00:18:25.782 "product_name": "passthru", 00:18:25.782 "block_size": 512, 00:18:25.782 "num_blocks": 65536, 00:18:25.782 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:25.782 "assigned_rate_limits": { 00:18:25.782 "rw_ios_per_sec": 0, 00:18:25.782 "rw_mbytes_per_sec": 0, 00:18:25.782 "r_mbytes_per_sec": 0, 00:18:25.782 "w_mbytes_per_sec": 0 00:18:25.782 }, 00:18:25.782 "claimed": true, 00:18:25.782 "claim_type": "exclusive_write", 00:18:25.782 "zoned": false, 00:18:25.782 "supported_io_types": { 00:18:25.782 "read": true, 00:18:25.782 "write": true, 00:18:25.782 "unmap": true, 00:18:25.782 "flush": true, 00:18:25.782 "reset": true, 00:18:25.782 "nvme_admin": false, 00:18:25.782 "nvme_io": false, 00:18:25.782 "nvme_io_md": false, 00:18:25.782 "write_zeroes": true, 00:18:25.782 "zcopy": true, 00:18:25.782 "get_zone_info": false, 00:18:25.782 "zone_management": false, 00:18:25.782 "zone_append": false, 00:18:25.782 "compare": false, 00:18:25.782 "compare_and_write": false, 00:18:25.782 "abort": true, 00:18:25.782 "seek_hole": false, 00:18:25.782 "seek_data": false, 00:18:25.782 "copy": true, 00:18:25.782 "nvme_iov_md": false 00:18:25.782 }, 00:18:25.782 "memory_domains": [ 00:18:25.782 { 00:18:25.782 "dma_device_id": "system", 00:18:25.782 "dma_device_type": 1 00:18:25.782 }, 00:18:25.782 { 00:18:25.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.782 "dma_device_type": 2 00:18:25.782 } 00:18:25.782 ], 00:18:25.782 "driver_specific": { 00:18:25.782 "passthru": { 00:18:25.782 "name": "pt1", 00:18:25.782 "base_bdev_name": "malloc1" 00:18:25.782 } 00:18:25.782 } 00:18:25.782 }' 00:18:25.782 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.782 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.782 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.782 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.041 18:33:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:26.041 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:26.300 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:26.300 "name": "pt2", 00:18:26.300 "aliases": [ 00:18:26.300 "00000000-0000-0000-0000-000000000002" 00:18:26.300 ], 00:18:26.300 "product_name": "passthru", 00:18:26.300 "block_size": 512, 00:18:26.300 "num_blocks": 65536, 00:18:26.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.300 "assigned_rate_limits": { 00:18:26.300 "rw_ios_per_sec": 0, 00:18:26.300 "rw_mbytes_per_sec": 0, 00:18:26.300 "r_mbytes_per_sec": 0, 00:18:26.300 "w_mbytes_per_sec": 0 00:18:26.300 }, 00:18:26.300 "claimed": true, 00:18:26.300 "claim_type": "exclusive_write", 00:18:26.300 "zoned": false, 00:18:26.300 "supported_io_types": { 00:18:26.300 "read": true, 00:18:26.300 "write": true, 00:18:26.300 "unmap": true, 00:18:26.300 "flush": true, 00:18:26.300 "reset": true, 00:18:26.300 "nvme_admin": false, 00:18:26.300 
"nvme_io": false, 00:18:26.300 "nvme_io_md": false, 00:18:26.300 "write_zeroes": true, 00:18:26.300 "zcopy": true, 00:18:26.300 "get_zone_info": false, 00:18:26.300 "zone_management": false, 00:18:26.300 "zone_append": false, 00:18:26.300 "compare": false, 00:18:26.300 "compare_and_write": false, 00:18:26.300 "abort": true, 00:18:26.300 "seek_hole": false, 00:18:26.300 "seek_data": false, 00:18:26.300 "copy": true, 00:18:26.300 "nvme_iov_md": false 00:18:26.300 }, 00:18:26.300 "memory_domains": [ 00:18:26.300 { 00:18:26.300 "dma_device_id": "system", 00:18:26.300 "dma_device_type": 1 00:18:26.300 }, 00:18:26.300 { 00:18:26.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.300 "dma_device_type": 2 00:18:26.300 } 00:18:26.300 ], 00:18:26.300 "driver_specific": { 00:18:26.300 "passthru": { 00:18:26.300 "name": "pt2", 00:18:26.300 "base_bdev_name": "malloc2" 00:18:26.300 } 00:18:26.300 } 00:18:26.300 }' 00:18:26.300 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.559 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.559 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:26.559 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.559 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.559 18:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:26.559 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.559 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.559 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:26.559 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:26.818 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:18:26.818 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:26.818 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:26.818 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:26.818 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.076 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.076 "name": "pt3", 00:18:27.076 "aliases": [ 00:18:27.076 "00000000-0000-0000-0000-000000000003" 00:18:27.076 ], 00:18:27.076 "product_name": "passthru", 00:18:27.076 "block_size": 512, 00:18:27.076 "num_blocks": 65536, 00:18:27.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.076 "assigned_rate_limits": { 00:18:27.076 "rw_ios_per_sec": 0, 00:18:27.076 "rw_mbytes_per_sec": 0, 00:18:27.076 "r_mbytes_per_sec": 0, 00:18:27.076 "w_mbytes_per_sec": 0 00:18:27.076 }, 00:18:27.076 "claimed": true, 00:18:27.077 "claim_type": "exclusive_write", 00:18:27.077 "zoned": false, 00:18:27.077 "supported_io_types": { 00:18:27.077 "read": true, 00:18:27.077 "write": true, 00:18:27.077 "unmap": true, 00:18:27.077 "flush": true, 00:18:27.077 "reset": true, 00:18:27.077 "nvme_admin": false, 00:18:27.077 "nvme_io": false, 00:18:27.077 "nvme_io_md": false, 00:18:27.077 "write_zeroes": true, 00:18:27.077 "zcopy": true, 00:18:27.077 "get_zone_info": false, 00:18:27.077 "zone_management": false, 00:18:27.077 "zone_append": false, 00:18:27.077 "compare": false, 00:18:27.077 "compare_and_write": false, 00:18:27.077 "abort": true, 00:18:27.077 "seek_hole": false, 00:18:27.077 "seek_data": false, 00:18:27.077 "copy": true, 00:18:27.077 "nvme_iov_md": false 00:18:27.077 }, 00:18:27.077 "memory_domains": [ 00:18:27.077 { 00:18:27.077 "dma_device_id": "system", 00:18:27.077 
"dma_device_type": 1 00:18:27.077 }, 00:18:27.077 { 00:18:27.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.077 "dma_device_type": 2 00:18:27.077 } 00:18:27.077 ], 00:18:27.077 "driver_specific": { 00:18:27.077 "passthru": { 00:18:27.077 "name": "pt3", 00:18:27.077 "base_bdev_name": "malloc3" 00:18:27.077 } 00:18:27.077 } 00:18:27.077 }' 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.077 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:27.335 18:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.594 18:33:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.594 "name": "pt4", 00:18:27.594 "aliases": [ 00:18:27.594 "00000000-0000-0000-0000-000000000004" 00:18:27.594 ], 00:18:27.594 "product_name": "passthru", 00:18:27.594 "block_size": 512, 00:18:27.594 "num_blocks": 65536, 00:18:27.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:27.594 "assigned_rate_limits": { 00:18:27.594 "rw_ios_per_sec": 0, 00:18:27.594 "rw_mbytes_per_sec": 0, 00:18:27.594 "r_mbytes_per_sec": 0, 00:18:27.594 "w_mbytes_per_sec": 0 00:18:27.594 }, 00:18:27.594 "claimed": true, 00:18:27.594 "claim_type": "exclusive_write", 00:18:27.594 "zoned": false, 00:18:27.594 "supported_io_types": { 00:18:27.594 "read": true, 00:18:27.594 "write": true, 00:18:27.594 "unmap": true, 00:18:27.594 "flush": true, 00:18:27.594 "reset": true, 00:18:27.594 "nvme_admin": false, 00:18:27.594 "nvme_io": false, 00:18:27.594 "nvme_io_md": false, 00:18:27.594 "write_zeroes": true, 00:18:27.594 "zcopy": true, 00:18:27.594 "get_zone_info": false, 00:18:27.594 "zone_management": false, 00:18:27.594 "zone_append": false, 00:18:27.594 "compare": false, 00:18:27.594 "compare_and_write": false, 00:18:27.594 "abort": true, 00:18:27.594 "seek_hole": false, 00:18:27.594 "seek_data": false, 00:18:27.594 "copy": true, 00:18:27.594 "nvme_iov_md": false 00:18:27.594 }, 00:18:27.594 "memory_domains": [ 00:18:27.594 { 00:18:27.594 "dma_device_id": "system", 00:18:27.594 "dma_device_type": 1 00:18:27.594 }, 00:18:27.594 { 00:18:27.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.594 "dma_device_type": 2 00:18:27.594 } 00:18:27.594 ], 00:18:27.594 "driver_specific": { 00:18:27.594 "passthru": { 00:18:27.594 "name": "pt4", 00:18:27.594 "base_bdev_name": "malloc4" 00:18:27.594 } 00:18:27.594 } 00:18:27.594 }' 00:18:27.594 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.853 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.853 18:33:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.853 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.853 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.853 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.853 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.853 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.111 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.111 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.111 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.111 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.111 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:28.111 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:28.369 [2024-07-15 18:33:13.863946] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.369 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d84a96f4-71d2-4132-99d1-100691eca27d 00:18:28.369 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d84a96f4-71d2-4132-99d1-100691eca27d ']' 00:18:28.369 18:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:28.627 [2024-07-15 18:33:14.120294] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.627 
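The repeated jq assertions above (bdev_raid.sh@205-208: `.block_size == 512`, `.md_size`, `.md_interleave`, and `.dif_type` all `null`) verify that each passthru base bdev exposes 512-byte blocks and carries no metadata or DIF configuration. A minimal Python sketch of the same check, run against a trimmed-down `bdev_get_bdevs` reply with the field names taken from the log output (this is an illustration of the check logic, not part of the SPDK test scripts):

```python
import json

# Trimmed-down bdev_get_bdevs output for one passthru bdev, as seen in the log.
# md_size / md_interleave / dif_type are absent from the reply; jq prints them
# as null, which is what the [[ null == null ]] comparisons above rely on.
bdev_json = '''
{
  "name": "pt2",
  "product_name": "passthru",
  "block_size": 512,
  "num_blocks": 65536
}
'''

def check_passthru_bdev(info):
    """Mirror the bdev_raid.sh@205-208 jq checks: 512-byte blocks, no metadata."""
    assert info["block_size"] == 512
    # Missing keys behave like jq's null output.
    for field in ("md_size", "md_interleave", "dif_type"):
        assert info.get(field) is None
    return True

print(check_passthru_bdev(json.loads(bdev_json)))  # prints True
```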
[2024-07-15 18:33:14.120311] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.627 [2024-07-15 18:33:14.120355] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.627 [2024-07-15 18:33:14.120414] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.627 [2024-07-15 18:33:14.120423] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad6d40 name raid_bdev1, state offline 00:18:28.627 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.627 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:28.886 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:28.886 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:28.886 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.886 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:29.145 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.145 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:29.403 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.404 18:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:29.662 18:33:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.662 18:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:29.922 18:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:29.922 18:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:30.181 18:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:30.749 [2024-07-15 18:33:16.145629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:30.749 [2024-07-15 18:33:16.147043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:30.749 [2024-07-15 18:33:16.147088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:30.749 [2024-07-15 18:33:16.147123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:30.749 [2024-07-15 18:33:16.147166] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:30.749 [2024-07-15 18:33:16.147199] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:30.749 [2024-07-15 18:33:16.147219] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:30.749 [2024-07-15 18:33:16.147238] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:30.749 
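The `NOT` wrapper above (common/autotest_common.sh) inverts the exit status: the test passes only if the second `bdev_raid_create` call fails, because the malloc bdevs already carry the superblock of an existing raid bdev. A hedged sketch of validating the resulting JSON-RPC error reply in Python, with the `code` and `message` values copied from the log record that follows (the helper name is illustrative, not an SPDK API):

```python
import json

# JSON-RPC error reply reported in the log when raid_bdev1 already exists.
reply = json.loads('''
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
''')

def expect_create_failure(error):
    """Pass only when the duplicate create errored out; -17 is -EEXIST."""
    assert error["code"] == -17
    assert "File exists" in error["message"]
    return True

print(expect_create_failure(reply))  # prints True
```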
[2024-07-15 18:33:16.147252] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.749 [2024-07-15 18:33:16.147259] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1928490 name raid_bdev1, state configuring 00:18:30.749 request: 00:18:30.749 { 00:18:30.749 "name": "raid_bdev1", 00:18:30.749 "raid_level": "raid0", 00:18:30.749 "base_bdevs": [ 00:18:30.749 "malloc1", 00:18:30.749 "malloc2", 00:18:30.749 "malloc3", 00:18:30.749 "malloc4" 00:18:30.749 ], 00:18:30.749 "strip_size_kb": 64, 00:18:30.749 "superblock": false, 00:18:30.749 "method": "bdev_raid_create", 00:18:30.749 "req_id": 1 00:18:30.749 } 00:18:30.749 Got JSON-RPC error response 00:18:30.749 response: 00:18:30.749 { 00:18:30.749 "code": -17, 00:18:30.749 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:30.749 } 00:18:30.749 18:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:30.749 18:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.749 18:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.749 18:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.749 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.749 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:31.008 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:31.008 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:31.008 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:18:31.267 [2024-07-15 18:33:16.666964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.267 [2024-07-15 18:33:16.667011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.267 [2024-07-15 18:33:16.667028] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad3100 00:18:31.267 [2024-07-15 18:33:16.667038] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.267 [2024-07-15 18:33:16.668707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.267 [2024-07-15 18:33:16.668735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.267 [2024-07-15 18:33:16.668805] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.267 [2024-07-15 18:33:16.668831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.267 pt1 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.267 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.525 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.525 "name": "raid_bdev1", 00:18:31.525 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:31.525 "strip_size_kb": 64, 00:18:31.525 "state": "configuring", 00:18:31.525 "raid_level": "raid0", 00:18:31.525 "superblock": true, 00:18:31.525 "num_base_bdevs": 4, 00:18:31.525 "num_base_bdevs_discovered": 1, 00:18:31.525 "num_base_bdevs_operational": 4, 00:18:31.525 "base_bdevs_list": [ 00:18:31.525 { 00:18:31.525 "name": "pt1", 00:18:31.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.525 "is_configured": true, 00:18:31.525 "data_offset": 2048, 00:18:31.525 "data_size": 63488 00:18:31.525 }, 00:18:31.525 { 00:18:31.525 "name": null, 00:18:31.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.525 "is_configured": false, 00:18:31.525 "data_offset": 2048, 00:18:31.525 "data_size": 63488 00:18:31.525 }, 00:18:31.525 { 00:18:31.525 "name": null, 00:18:31.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.525 "is_configured": false, 00:18:31.525 "data_offset": 2048, 00:18:31.525 "data_size": 63488 00:18:31.525 }, 00:18:31.525 { 00:18:31.525 "name": null, 00:18:31.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.525 "is_configured": false, 00:18:31.525 "data_offset": 2048, 00:18:31.525 "data_size": 63488 00:18:31.525 } 00:18:31.525 ] 00:18:31.525 }' 00:18:31.525 18:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.525 18:33:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.457 18:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:32.457 18:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.457 [2024-07-15 18:33:17.886253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.457 [2024-07-15 18:33:17.886303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.457 [2024-07-15 18:33:17.886319] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad8070 00:18:32.457 [2024-07-15 18:33:17.886328] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.457 [2024-07-15 18:33:17.886672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.457 [2024-07-15 18:33:17.886689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.457 [2024-07-15 18:33:17.886748] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:32.457 [2024-07-15 18:33:17.886766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.457 pt2 00:18:32.457 18:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:32.715 [2024-07-15 18:33:18.138939] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:32.715 18:33:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.715 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.973 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.973 "name": "raid_bdev1", 00:18:32.973 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:32.973 "strip_size_kb": 64, 00:18:32.973 "state": "configuring", 00:18:32.973 "raid_level": "raid0", 00:18:32.973 "superblock": true, 00:18:32.973 "num_base_bdevs": 4, 00:18:32.973 "num_base_bdevs_discovered": 1, 00:18:32.973 "num_base_bdevs_operational": 4, 00:18:32.973 "base_bdevs_list": [ 00:18:32.973 { 00:18:32.973 "name": "pt1", 00:18:32.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.973 "is_configured": true, 00:18:32.973 "data_offset": 2048, 00:18:32.973 "data_size": 63488 00:18:32.973 }, 00:18:32.973 { 00:18:32.973 "name": null, 00:18:32.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.973 
"is_configured": false, 00:18:32.973 "data_offset": 2048, 00:18:32.973 "data_size": 63488 00:18:32.973 }, 00:18:32.973 { 00:18:32.973 "name": null, 00:18:32.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.973 "is_configured": false, 00:18:32.973 "data_offset": 2048, 00:18:32.973 "data_size": 63488 00:18:32.973 }, 00:18:32.973 { 00:18:32.973 "name": null, 00:18:32.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:32.973 "is_configured": false, 00:18:32.973 "data_offset": 2048, 00:18:32.973 "data_size": 63488 00:18:32.973 } 00:18:32.973 ] 00:18:32.973 }' 00:18:32.973 18:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.973 18:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.539 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:33.539 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:33.539 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.797 [2024-07-15 18:33:19.274002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.797 [2024-07-15 18:33:19.274049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.797 [2024-07-15 18:33:19.274066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad5be0 00:18:33.797 [2024-07-15 18:33:19.274075] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.797 [2024-07-15 18:33:19.274402] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.797 [2024-07-15 18:33:19.274420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.797 [2024-07-15 18:33:19.274477] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:33.797 [2024-07-15 18:33:19.274495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.797 pt2 00:18:33.797 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:33.797 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:33.797 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:34.055 [2024-07-15 18:33:19.530707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:34.055 [2024-07-15 18:33:19.530744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.055 [2024-07-15 18:33:19.530758] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad5030 00:18:34.055 [2024-07-15 18:33:19.530767] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.055 [2024-07-15 18:33:19.531073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.055 [2024-07-15 18:33:19.531088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:34.055 [2024-07-15 18:33:19.531139] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:34.055 [2024-07-15 18:33:19.531156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:34.055 pt3 00:18:34.055 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:34.055 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:34.055 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:34.313 [2024-07-15 18:33:19.791392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:34.313 [2024-07-15 18:33:19.791425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.313 [2024-07-15 18:33:19.791438] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1927c40 00:18:34.313 [2024-07-15 18:33:19.791447] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.313 [2024-07-15 18:33:19.791740] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.313 [2024-07-15 18:33:19.791755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:34.313 [2024-07-15 18:33:19.791805] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:34.313 [2024-07-15 18:33:19.791822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:34.313 [2024-07-15 18:33:19.791943] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ad8390 00:18:34.313 [2024-07-15 18:33:19.791961] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:34.313 [2024-07-15 18:33:19.792144] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ad6010 00:18:34.313 [2024-07-15 18:33:19.792278] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ad8390 00:18:34.313 [2024-07-15 18:33:19.792286] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ad8390 00:18:34.313 [2024-07-15 18:33:19.792383] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.313 pt4 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 
00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.313 18:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.572 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.572 "name": "raid_bdev1", 00:18:34.572 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:34.572 "strip_size_kb": 64, 00:18:34.572 "state": "online", 00:18:34.572 "raid_level": "raid0", 00:18:34.572 "superblock": true, 00:18:34.572 "num_base_bdevs": 4, 00:18:34.572 "num_base_bdevs_discovered": 4, 00:18:34.572 "num_base_bdevs_operational": 4, 
00:18:34.572 "base_bdevs_list": [ 00:18:34.572 { 00:18:34.572 "name": "pt1", 00:18:34.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.572 "is_configured": true, 00:18:34.572 "data_offset": 2048, 00:18:34.572 "data_size": 63488 00:18:34.572 }, 00:18:34.572 { 00:18:34.572 "name": "pt2", 00:18:34.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.572 "is_configured": true, 00:18:34.572 "data_offset": 2048, 00:18:34.572 "data_size": 63488 00:18:34.572 }, 00:18:34.572 { 00:18:34.572 "name": "pt3", 00:18:34.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:34.572 "is_configured": true, 00:18:34.572 "data_offset": 2048, 00:18:34.572 "data_size": 63488 00:18:34.572 }, 00:18:34.572 { 00:18:34.572 "name": "pt4", 00:18:34.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:34.572 "is_configured": true, 00:18:34.572 "data_offset": 2048, 00:18:34.572 "data_size": 63488 00:18:34.572 } 00:18:34.572 ] 00:18:34.572 }' 00:18:34.572 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.572 18:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
00:18:35.138 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:35.397 [2024-07-15 18:33:20.882659] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.397 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:35.397 "name": "raid_bdev1", 00:18:35.397 "aliases": [ 00:18:35.397 "d84a96f4-71d2-4132-99d1-100691eca27d" 00:18:35.397 ], 00:18:35.397 "product_name": "Raid Volume", 00:18:35.397 "block_size": 512, 00:18:35.397 "num_blocks": 253952, 00:18:35.397 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:35.397 "assigned_rate_limits": { 00:18:35.397 "rw_ios_per_sec": 0, 00:18:35.397 "rw_mbytes_per_sec": 0, 00:18:35.397 "r_mbytes_per_sec": 0, 00:18:35.397 "w_mbytes_per_sec": 0 00:18:35.397 }, 00:18:35.397 "claimed": false, 00:18:35.397 "zoned": false, 00:18:35.397 "supported_io_types": { 00:18:35.397 "read": true, 00:18:35.397 "write": true, 00:18:35.397 "unmap": true, 00:18:35.397 "flush": true, 00:18:35.397 "reset": true, 00:18:35.397 "nvme_admin": false, 00:18:35.397 "nvme_io": false, 00:18:35.397 "nvme_io_md": false, 00:18:35.397 "write_zeroes": true, 00:18:35.397 "zcopy": false, 00:18:35.397 "get_zone_info": false, 00:18:35.397 "zone_management": false, 00:18:35.397 "zone_append": false, 00:18:35.397 "compare": false, 00:18:35.397 "compare_and_write": false, 00:18:35.397 "abort": false, 00:18:35.397 "seek_hole": false, 00:18:35.397 "seek_data": false, 00:18:35.397 "copy": false, 00:18:35.397 "nvme_iov_md": false 00:18:35.397 }, 00:18:35.397 "memory_domains": [ 00:18:35.397 { 00:18:35.397 "dma_device_id": "system", 00:18:35.397 "dma_device_type": 1 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.397 "dma_device_type": 2 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "system", 00:18:35.397 "dma_device_type": 1 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:35.397 "dma_device_type": 2 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "system", 00:18:35.397 "dma_device_type": 1 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.397 "dma_device_type": 2 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "system", 00:18:35.397 "dma_device_type": 1 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.397 "dma_device_type": 2 00:18:35.397 } 00:18:35.397 ], 00:18:35.397 "driver_specific": { 00:18:35.397 "raid": { 00:18:35.397 "uuid": "d84a96f4-71d2-4132-99d1-100691eca27d", 00:18:35.397 "strip_size_kb": 64, 00:18:35.397 "state": "online", 00:18:35.397 "raid_level": "raid0", 00:18:35.397 "superblock": true, 00:18:35.397 "num_base_bdevs": 4, 00:18:35.397 "num_base_bdevs_discovered": 4, 00:18:35.397 "num_base_bdevs_operational": 4, 00:18:35.397 "base_bdevs_list": [ 00:18:35.397 { 00:18:35.397 "name": "pt1", 00:18:35.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.397 "is_configured": true, 00:18:35.397 "data_offset": 2048, 00:18:35.397 "data_size": 63488 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "name": "pt2", 00:18:35.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.397 "is_configured": true, 00:18:35.397 "data_offset": 2048, 00:18:35.397 "data_size": 63488 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "name": "pt3", 00:18:35.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.397 "is_configured": true, 00:18:35.397 "data_offset": 2048, 00:18:35.397 "data_size": 63488 00:18:35.397 }, 00:18:35.397 { 00:18:35.397 "name": "pt4", 00:18:35.397 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:35.397 "is_configured": true, 00:18:35.397 "data_offset": 2048, 00:18:35.397 "data_size": 63488 00:18:35.397 } 00:18:35.397 ] 00:18:35.397 } 00:18:35.397 } 00:18:35.397 }' 00:18:35.397 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.655 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:35.655 pt2 00:18:35.655 pt3 00:18:35.655 pt4' 00:18:35.655 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:35.655 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:35.655 18:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:35.913 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:35.913 "name": "pt1", 00:18:35.913 "aliases": [ 00:18:35.913 "00000000-0000-0000-0000-000000000001" 00:18:35.913 ], 00:18:35.913 "product_name": "passthru", 00:18:35.913 "block_size": 512, 00:18:35.913 "num_blocks": 65536, 00:18:35.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.913 "assigned_rate_limits": { 00:18:35.913 "rw_ios_per_sec": 0, 00:18:35.913 "rw_mbytes_per_sec": 0, 00:18:35.913 "r_mbytes_per_sec": 0, 00:18:35.913 "w_mbytes_per_sec": 0 00:18:35.913 }, 00:18:35.913 "claimed": true, 00:18:35.913 "claim_type": "exclusive_write", 00:18:35.913 "zoned": false, 00:18:35.913 "supported_io_types": { 00:18:35.913 "read": true, 00:18:35.913 "write": true, 00:18:35.913 "unmap": true, 00:18:35.913 "flush": true, 00:18:35.913 "reset": true, 00:18:35.913 "nvme_admin": false, 00:18:35.913 "nvme_io": false, 00:18:35.913 "nvme_io_md": false, 00:18:35.913 "write_zeroes": true, 00:18:35.913 "zcopy": true, 00:18:35.914 "get_zone_info": false, 00:18:35.914 "zone_management": false, 00:18:35.914 "zone_append": false, 00:18:35.914 "compare": false, 00:18:35.914 "compare_and_write": false, 00:18:35.914 "abort": true, 00:18:35.914 "seek_hole": false, 00:18:35.914 "seek_data": false, 00:18:35.914 "copy": true, 00:18:35.914 "nvme_iov_md": 
false 00:18:35.914 }, 00:18:35.914 "memory_domains": [ 00:18:35.914 { 00:18:35.914 "dma_device_id": "system", 00:18:35.914 "dma_device_type": 1 00:18:35.914 }, 00:18:35.914 { 00:18:35.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.914 "dma_device_type": 2 00:18:35.914 } 00:18:35.914 ], 00:18:35.914 "driver_specific": { 00:18:35.914 "passthru": { 00:18:35.914 "name": "pt1", 00:18:35.914 "base_bdev_name": "malloc1" 00:18:35.914 } 00:18:35.914 } 00:18:35.914 }' 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.914 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.171 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.171 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.171 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.171 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.171 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.171 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:36.171 18:33:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.429 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.429 "name": "pt2", 00:18:36.429 "aliases": [ 00:18:36.429 "00000000-0000-0000-0000-000000000002" 00:18:36.429 ], 00:18:36.429 "product_name": "passthru", 00:18:36.429 "block_size": 512, 00:18:36.429 "num_blocks": 65536, 00:18:36.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.429 "assigned_rate_limits": { 00:18:36.429 "rw_ios_per_sec": 0, 00:18:36.429 "rw_mbytes_per_sec": 0, 00:18:36.429 "r_mbytes_per_sec": 0, 00:18:36.429 "w_mbytes_per_sec": 0 00:18:36.429 }, 00:18:36.429 "claimed": true, 00:18:36.429 "claim_type": "exclusive_write", 00:18:36.429 "zoned": false, 00:18:36.429 "supported_io_types": { 00:18:36.429 "read": true, 00:18:36.429 "write": true, 00:18:36.429 "unmap": true, 00:18:36.429 "flush": true, 00:18:36.429 "reset": true, 00:18:36.429 "nvme_admin": false, 00:18:36.429 "nvme_io": false, 00:18:36.429 "nvme_io_md": false, 00:18:36.429 "write_zeroes": true, 00:18:36.429 "zcopy": true, 00:18:36.429 "get_zone_info": false, 00:18:36.429 "zone_management": false, 00:18:36.429 "zone_append": false, 00:18:36.429 "compare": false, 00:18:36.429 "compare_and_write": false, 00:18:36.429 "abort": true, 00:18:36.429 "seek_hole": false, 00:18:36.429 "seek_data": false, 00:18:36.429 "copy": true, 00:18:36.429 "nvme_iov_md": false 00:18:36.429 }, 00:18:36.429 "memory_domains": [ 00:18:36.429 { 00:18:36.429 "dma_device_id": "system", 00:18:36.429 "dma_device_type": 1 00:18:36.429 }, 00:18:36.429 { 00:18:36.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.429 "dma_device_type": 2 00:18:36.429 } 00:18:36.429 ], 00:18:36.429 "driver_specific": { 00:18:36.429 "passthru": { 00:18:36.429 "name": "pt2", 00:18:36.429 "base_bdev_name": "malloc2" 00:18:36.429 } 00:18:36.429 } 00:18:36.429 }' 00:18:36.429 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:36.429 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.429 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.429 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.687 18:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:36.687 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.945 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.945 "name": "pt3", 00:18:36.945 "aliases": [ 00:18:36.945 "00000000-0000-0000-0000-000000000003" 00:18:36.945 ], 00:18:36.945 "product_name": "passthru", 00:18:36.945 "block_size": 512, 00:18:36.945 "num_blocks": 65536, 00:18:36.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.945 "assigned_rate_limits": { 00:18:36.945 "rw_ios_per_sec": 0, 00:18:36.945 "rw_mbytes_per_sec": 0, 
00:18:36.945 "r_mbytes_per_sec": 0, 00:18:36.945 "w_mbytes_per_sec": 0 00:18:36.945 }, 00:18:36.945 "claimed": true, 00:18:36.945 "claim_type": "exclusive_write", 00:18:36.945 "zoned": false, 00:18:36.945 "supported_io_types": { 00:18:36.945 "read": true, 00:18:36.945 "write": true, 00:18:36.945 "unmap": true, 00:18:36.945 "flush": true, 00:18:36.945 "reset": true, 00:18:36.945 "nvme_admin": false, 00:18:36.945 "nvme_io": false, 00:18:36.945 "nvme_io_md": false, 00:18:36.945 "write_zeroes": true, 00:18:36.945 "zcopy": true, 00:18:36.945 "get_zone_info": false, 00:18:36.945 "zone_management": false, 00:18:36.945 "zone_append": false, 00:18:36.945 "compare": false, 00:18:36.945 "compare_and_write": false, 00:18:36.945 "abort": true, 00:18:36.945 "seek_hole": false, 00:18:36.945 "seek_data": false, 00:18:36.945 "copy": true, 00:18:36.945 "nvme_iov_md": false 00:18:36.945 }, 00:18:36.945 "memory_domains": [ 00:18:36.945 { 00:18:36.945 "dma_device_id": "system", 00:18:36.945 "dma_device_type": 1 00:18:36.945 }, 00:18:36.945 { 00:18:36.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.945 "dma_device_type": 2 00:18:36.945 } 00:18:36.945 ], 00:18:36.945 "driver_specific": { 00:18:36.945 "passthru": { 00:18:36.945 "name": "pt3", 00:18:36.945 "base_bdev_name": "malloc3" 00:18:36.945 } 00:18:36.945 } 00:18:36.945 }' 00:18:36.945 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.245 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.530 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.530 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.530 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.530 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:37.530 18:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.788 "name": "pt4", 00:18:37.788 "aliases": [ 00:18:37.788 "00000000-0000-0000-0000-000000000004" 00:18:37.788 ], 00:18:37.788 "product_name": "passthru", 00:18:37.788 "block_size": 512, 00:18:37.788 "num_blocks": 65536, 00:18:37.788 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:37.788 "assigned_rate_limits": { 00:18:37.788 "rw_ios_per_sec": 0, 00:18:37.788 "rw_mbytes_per_sec": 0, 00:18:37.788 "r_mbytes_per_sec": 0, 00:18:37.788 "w_mbytes_per_sec": 0 00:18:37.788 }, 00:18:37.788 "claimed": true, 00:18:37.788 "claim_type": "exclusive_write", 00:18:37.788 "zoned": false, 00:18:37.788 "supported_io_types": { 00:18:37.788 "read": true, 00:18:37.788 "write": true, 00:18:37.788 "unmap": true, 00:18:37.788 "flush": true, 00:18:37.788 "reset": true, 00:18:37.788 "nvme_admin": false, 00:18:37.788 "nvme_io": false, 00:18:37.788 "nvme_io_md": false, 00:18:37.788 "write_zeroes": true, 00:18:37.788 "zcopy": true, 00:18:37.788 "get_zone_info": false, 00:18:37.788 
"zone_management": false, 00:18:37.788 "zone_append": false, 00:18:37.788 "compare": false, 00:18:37.788 "compare_and_write": false, 00:18:37.788 "abort": true, 00:18:37.788 "seek_hole": false, 00:18:37.788 "seek_data": false, 00:18:37.788 "copy": true, 00:18:37.788 "nvme_iov_md": false 00:18:37.788 }, 00:18:37.788 "memory_domains": [ 00:18:37.788 { 00:18:37.788 "dma_device_id": "system", 00:18:37.788 "dma_device_type": 1 00:18:37.788 }, 00:18:37.788 { 00:18:37.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.788 "dma_device_type": 2 00:18:37.788 } 00:18:37.788 ], 00:18:37.788 "driver_specific": { 00:18:37.788 "passthru": { 00:18:37.788 "name": "pt4", 00:18:37.788 "base_bdev_name": "malloc4" 00:18:37.788 } 00:18:37.788 } 00:18:37.788 }' 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.788 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.045 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.045 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.045 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.045 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.045 18:33:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.045 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:38.303 [2024-07-15 18:33:23.610001] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d84a96f4-71d2-4132-99d1-100691eca27d '!=' d84a96f4-71d2-4132-99d1-100691eca27d ']' 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2843968 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2843968 ']' 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2843968 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2843968 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2843968' 00:18:38.303 killing process with pid 2843968 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2843968 
00:18:38.303 [2024-07-15 18:33:23.682364] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.303 [2024-07-15 18:33:23.682421] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.303 [2024-07-15 18:33:23.682482] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.303 [2024-07-15 18:33:23.682490] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad8390 name raid_bdev1, state offline 00:18:38.303 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2843968 00:18:38.303 [2024-07-15 18:33:23.718319] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.560 18:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:38.560 00:18:38.560 real 0m17.697s 00:18:38.560 user 0m32.937s 00:18:38.560 sys 0m2.343s 00:18:38.561 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.561 18:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.561 ************************************ 00:18:38.561 END TEST raid_superblock_test 00:18:38.561 ************************************ 00:18:38.561 18:33:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:38.561 18:33:23 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:18:38.561 18:33:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:38.561 18:33:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.561 18:33:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.561 ************************************ 00:18:38.561 START TEST raid_read_error_test 00:18:38.561 ************************************ 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:18:38.561 
18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:38.561 18:33:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.fWLVzKZd9Q 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2847302 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2847302 /var/tmp/spdk-raid.sock 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2847302 ']' 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.561 18:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.561 [2024-07-15 18:33:24.029882] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:38.561 [2024-07-15 18:33:24.029941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847302 ] 00:18:38.819 [2024-07-15 18:33:24.130166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.819 [2024-07-15 18:33:24.227654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.819 [2024-07-15 18:33:24.284959] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.819 [2024-07-15 18:33:24.285004] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.752 18:33:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.752 18:33:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:39.752 18:33:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:39.752 18:33:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:39.752 BaseBdev1_malloc 00:18:39.752 18:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 
00:18:40.010 true 00:18:40.010 18:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:40.268 [2024-07-15 18:33:25.751109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:40.268 [2024-07-15 18:33:25.751151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.268 [2024-07-15 18:33:25.751168] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2092d20 00:18:40.268 [2024-07-15 18:33:25.751177] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.268 [2024-07-15 18:33:25.752855] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.269 [2024-07-15 18:33:25.752882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:40.269 BaseBdev1 00:18:40.269 18:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:40.269 18:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:40.527 BaseBdev2_malloc 00:18:40.527 18:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:40.786 true 00:18:40.786 18:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:41.044 [2024-07-15 18:33:26.537506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:41.044 [2024-07-15 18:33:26.537542] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.044 [2024-07-15 18:33:26.537558] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2097d50 00:18:41.044 [2024-07-15 18:33:26.537567] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.044 [2024-07-15 18:33:26.539032] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.044 [2024-07-15 18:33:26.539057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:41.044 BaseBdev2 00:18:41.044 18:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:41.044 18:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:41.303 BaseBdev3_malloc 00:18:41.303 18:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:41.562 true 00:18:41.562 18:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:41.820 [2024-07-15 18:33:27.328036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:41.820 [2024-07-15 18:33:27.328071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.821 [2024-07-15 18:33:27.328091] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2096ef0 00:18:41.821 [2024-07-15 18:33:27.328101] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.821 [2024-07-15 18:33:27.329573] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:18:41.821 [2024-07-15 18:33:27.329600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:41.821 BaseBdev3 00:18:41.821 18:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:41.821 18:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:42.079 BaseBdev4_malloc 00:18:42.079 18:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:42.337 true 00:18:42.337 18:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:42.596 [2024-07-15 18:33:28.114445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:42.596 [2024-07-15 18:33:28.114483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.596 [2024-07-15 18:33:28.114500] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x209b280 00:18:42.596 [2024-07-15 18:33:28.114510] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.596 [2024-07-15 18:33:28.115966] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.596 [2024-07-15 18:33:28.115991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:42.596 BaseBdev4 00:18:42.596 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n raid_bdev1 -s 00:18:43.164 [2024-07-15 18:33:28.443487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.164 [2024-07-15 18:33:28.444883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.164 [2024-07-15 18:33:28.444959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.164 [2024-07-15 18:33:28.445020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.164 [2024-07-15 18:33:28.445262] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x209cd90 00:18:43.164 [2024-07-15 18:33:28.445273] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:43.164 [2024-07-15 18:33:28.445476] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209a8d0 00:18:43.164 [2024-07-15 18:33:28.445635] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x209cd90 00:18:43.164 [2024-07-15 18:33:28.445644] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x209cd90 00:18:43.164 [2024-07-15 18:33:28.445749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:43.164 18:33:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.164 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.164 "name": "raid_bdev1", 00:18:43.164 "uuid": "02d34db2-911f-4236-8c3b-d83433eb7f24", 00:18:43.164 "strip_size_kb": 64, 00:18:43.164 "state": "online", 00:18:43.164 "raid_level": "raid0", 00:18:43.164 "superblock": true, 00:18:43.164 "num_base_bdevs": 4, 00:18:43.164 "num_base_bdevs_discovered": 4, 00:18:43.164 "num_base_bdevs_operational": 4, 00:18:43.164 "base_bdevs_list": [ 00:18:43.164 { 00:18:43.164 "name": "BaseBdev1", 00:18:43.164 "uuid": "e209a6a8-5544-5de5-b2dd-69032704d298", 00:18:43.164 "is_configured": true, 00:18:43.164 "data_offset": 2048, 00:18:43.164 "data_size": 63488 00:18:43.164 }, 00:18:43.164 { 00:18:43.164 "name": "BaseBdev2", 00:18:43.164 "uuid": "13869eca-6e53-568a-a5ef-056ba0ed1ec5", 00:18:43.164 "is_configured": true, 00:18:43.164 "data_offset": 2048, 00:18:43.164 "data_size": 63488 00:18:43.164 }, 00:18:43.164 { 00:18:43.164 "name": "BaseBdev3", 00:18:43.164 "uuid": "3b4c9d6d-b37c-5da8-9cc4-4ebf881c8cb6", 00:18:43.164 "is_configured": true, 00:18:43.164 "data_offset": 2048, 00:18:43.164 "data_size": 63488 00:18:43.164 }, 00:18:43.164 { 00:18:43.164 "name": "BaseBdev4", 00:18:43.164 "uuid": 
"284db546-b14f-5601-b7d2-cb9eb9ee4ae4", 00:18:43.164 "is_configured": true, 00:18:43.164 "data_offset": 2048, 00:18:43.164 "data_size": 63488 00:18:43.165 } 00:18:43.165 ] 00:18:43.165 }' 00:18:43.165 18:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.165 18:33:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.732 18:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:43.732 18:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:43.990 [2024-07-15 18:33:29.338152] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a01c0 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.935 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.194 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.194 "name": "raid_bdev1", 00:18:45.194 "uuid": "02d34db2-911f-4236-8c3b-d83433eb7f24", 00:18:45.194 "strip_size_kb": 64, 00:18:45.194 "state": "online", 00:18:45.194 "raid_level": "raid0", 00:18:45.194 "superblock": true, 00:18:45.194 "num_base_bdevs": 4, 00:18:45.194 "num_base_bdevs_discovered": 4, 00:18:45.194 "num_base_bdevs_operational": 4, 00:18:45.194 "base_bdevs_list": [ 00:18:45.194 { 00:18:45.194 "name": "BaseBdev1", 00:18:45.194 "uuid": "e209a6a8-5544-5de5-b2dd-69032704d298", 00:18:45.194 "is_configured": true, 00:18:45.194 "data_offset": 2048, 00:18:45.194 "data_size": 63488 00:18:45.194 }, 00:18:45.194 { 00:18:45.194 "name": "BaseBdev2", 00:18:45.194 "uuid": "13869eca-6e53-568a-a5ef-056ba0ed1ec5", 00:18:45.194 "is_configured": true, 00:18:45.194 "data_offset": 2048, 00:18:45.194 "data_size": 63488 00:18:45.194 }, 00:18:45.194 { 00:18:45.194 "name": "BaseBdev3", 00:18:45.194 "uuid": "3b4c9d6d-b37c-5da8-9cc4-4ebf881c8cb6", 00:18:45.194 "is_configured": true, 00:18:45.194 "data_offset": 2048, 00:18:45.194 "data_size": 63488 00:18:45.194 }, 00:18:45.194 { 
00:18:45.194 "name": "BaseBdev4", 00:18:45.194 "uuid": "284db546-b14f-5601-b7d2-cb9eb9ee4ae4", 00:18:45.194 "is_configured": true, 00:18:45.194 "data_offset": 2048, 00:18:45.194 "data_size": 63488 00:18:45.194 } 00:18:45.194 ] 00:18:45.194 }' 00:18:45.194 18:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.194 18:33:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.130 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:46.130 [2024-07-15 18:33:31.572930] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.130 [2024-07-15 18:33:31.572973] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.130 [2024-07-15 18:33:31.576381] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.130 [2024-07-15 18:33:31.576420] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.130 [2024-07-15 18:33:31.576460] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.130 [2024-07-15 18:33:31.576468] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209cd90 name raid_bdev1, state offline 00:18:46.130 0 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2847302 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2847302 ']' 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2847302 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.131 18:33:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2847302 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2847302' 00:18:46.131 killing process with pid 2847302 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2847302 00:18:46.131 [2024-07-15 18:33:31.651285] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.131 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2847302 00:18:46.131 [2024-07-15 18:33:31.680853] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.fWLVzKZd9Q 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:18:46.389 00:18:46.389 real 0m7.933s 00:18:46.389 user 0m13.073s 00:18:46.389 sys 0m1.079s 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.389 18:33:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.389 ************************************ 00:18:46.389 END TEST raid_read_error_test 00:18:46.389 ************************************ 00:18:46.389 18:33:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:46.389 18:33:31 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:18:46.389 18:33:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:46.389 18:33:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.389 18:33:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.389 ************************************ 00:18:46.389 START TEST raid_write_error_test 00:18:46.389 ************************************ 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.389 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.390 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:46.390 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.390 18:33:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.390 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:46.390 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.390 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.648 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:46.648 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.648 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.648 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:46.648 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:46.648 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.H3ftDpBvee 00:18:46.649 18:33:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2848629 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2848629 /var/tmp/spdk-raid.sock 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2848629 ']' 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:46.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.649 18:33:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.649 [2024-07-15 18:33:32.009657] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:18:46.649 [2024-07-15 18:33:32.009723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848629 ] 00:18:46.649 [2024-07-15 18:33:32.110686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.908 [2024-07-15 18:33:32.202012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.908 [2024-07-15 18:33:32.269192] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.908 [2024-07-15 18:33:32.269229] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.474 18:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.474 18:33:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:47.474 18:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:47.474 18:33:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:47.733 BaseBdev1_malloc 00:18:47.733 18:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:47.991 true 00:18:47.991 18:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:48.249 [2024-07-15 18:33:33.720904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:48.249 [2024-07-15 18:33:33.720947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:48.249 [2024-07-15 18:33:33.720971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b0d20 00:18:48.249 [2024-07-15 18:33:33.720980] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.249 [2024-07-15 18:33:33.722672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.249 [2024-07-15 18:33:33.722701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:48.249 BaseBdev1 00:18:48.249 18:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:48.249 18:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:48.527 BaseBdev2_malloc 00:18:48.527 18:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:48.786 true 00:18:48.786 18:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:49.045 [2024-07-15 18:33:34.507456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:49.045 [2024-07-15 18:33:34.507494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.045 [2024-07-15 18:33:34.507509] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b5d50 00:18:49.045 [2024-07-15 18:33:34.507524] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.045 [2024-07-15 18:33:34.508987] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.045 [2024-07-15 18:33:34.509013] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:49.045 BaseBdev2 00:18:49.045 18:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:49.045 18:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:49.304 BaseBdev3_malloc 00:18:49.304 18:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:49.562 true 00:18:49.562 18:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:49.821 [2024-07-15 18:33:35.289801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:49.821 [2024-07-15 18:33:35.289840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.821 [2024-07-15 18:33:35.289856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b4ef0 00:18:49.821 [2024-07-15 18:33:35.289866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.821 [2024-07-15 18:33:35.291346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.821 [2024-07-15 18:33:35.291373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:49.821 BaseBdev3 00:18:49.821 18:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:49.821 18:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:50.080 BaseBdev4_malloc 00:18:50.080 18:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:50.339 true 00:18:50.339 18:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:50.596 [2024-07-15 18:33:36.076404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:50.597 [2024-07-15 18:33:36.076444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.597 [2024-07-15 18:33:36.076460] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b9280 00:18:50.597 [2024-07-15 18:33:36.076470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.597 [2024-07-15 18:33:36.077933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.597 [2024-07-15 18:33:36.077965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:50.597 BaseBdev4 00:18:50.597 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:50.856 [2024-07-15 18:33:36.329108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.856 [2024-07-15 18:33:36.330366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.856 [2024-07-15 18:33:36.330433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.856 [2024-07-15 18:33:36.330493] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:50.856 [2024-07-15 18:33:36.330727] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13bad90 00:18:50.856 [2024-07-15 18:33:36.330738] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:50.856 [2024-07-15 18:33:36.330923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13b88d0 00:18:50.856 [2024-07-15 18:33:36.331087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13bad90 00:18:50.856 [2024-07-15 18:33:36.331096] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13bad90 00:18:50.856 [2024-07-15 18:33:36.331197] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.856 18:33:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.856 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.115 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.115 "name": "raid_bdev1", 00:18:51.115 "uuid": "e654dc6d-1b44-47a3-8d2e-546b7b4c4abe", 00:18:51.115 "strip_size_kb": 64, 00:18:51.115 "state": "online", 00:18:51.115 "raid_level": "raid0", 00:18:51.115 "superblock": true, 00:18:51.115 "num_base_bdevs": 4, 00:18:51.115 "num_base_bdevs_discovered": 4, 00:18:51.115 "num_base_bdevs_operational": 4, 00:18:51.115 "base_bdevs_list": [ 00:18:51.115 { 00:18:51.115 "name": "BaseBdev1", 00:18:51.115 "uuid": "8967184c-fe4c-500e-85ac-484cadbd9759", 00:18:51.115 "is_configured": true, 00:18:51.115 "data_offset": 2048, 00:18:51.115 "data_size": 63488 00:18:51.115 }, 00:18:51.115 { 00:18:51.115 "name": "BaseBdev2", 00:18:51.115 "uuid": "65c1a23b-f0d2-55bd-aeb2-ed5e606d39f5", 00:18:51.115 "is_configured": true, 00:18:51.115 "data_offset": 2048, 00:18:51.115 "data_size": 63488 00:18:51.115 }, 00:18:51.115 { 00:18:51.115 "name": "BaseBdev3", 00:18:51.115 "uuid": "df9853cc-aacf-5f8e-8237-449dbe4655fd", 00:18:51.115 "is_configured": true, 00:18:51.115 "data_offset": 2048, 00:18:51.115 "data_size": 63488 00:18:51.115 }, 00:18:51.115 { 00:18:51.115 "name": "BaseBdev4", 00:18:51.115 "uuid": "5be0cabf-3e6f-527d-882d-c31a7277c0de", 00:18:51.115 "is_configured": true, 00:18:51.115 "data_offset": 2048, 00:18:51.115 "data_size": 63488 00:18:51.115 } 00:18:51.115 ] 00:18:51.115 }' 00:18:51.115 18:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.115 18:33:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.052 18:33:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:18:52.052 18:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:52.052 [2024-07-15 18:33:37.412368] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13be1c0 00:18:52.988 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.247 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:53.248 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.248 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.248 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.248 18:33:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.248 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.248 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.507 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.507 "name": "raid_bdev1", 00:18:53.507 "uuid": "e654dc6d-1b44-47a3-8d2e-546b7b4c4abe", 00:18:53.507 "strip_size_kb": 64, 00:18:53.507 "state": "online", 00:18:53.507 "raid_level": "raid0", 00:18:53.507 "superblock": true, 00:18:53.507 "num_base_bdevs": 4, 00:18:53.507 "num_base_bdevs_discovered": 4, 00:18:53.507 "num_base_bdevs_operational": 4, 00:18:53.507 "base_bdevs_list": [ 00:18:53.507 { 00:18:53.507 "name": "BaseBdev1", 00:18:53.507 "uuid": "8967184c-fe4c-500e-85ac-484cadbd9759", 00:18:53.507 "is_configured": true, 00:18:53.507 "data_offset": 2048, 00:18:53.507 "data_size": 63488 00:18:53.507 }, 00:18:53.507 { 00:18:53.507 "name": "BaseBdev2", 00:18:53.507 "uuid": "65c1a23b-f0d2-55bd-aeb2-ed5e606d39f5", 00:18:53.507 "is_configured": true, 00:18:53.507 "data_offset": 2048, 00:18:53.507 "data_size": 63488 00:18:53.507 }, 00:18:53.507 { 00:18:53.507 "name": "BaseBdev3", 00:18:53.507 "uuid": "df9853cc-aacf-5f8e-8237-449dbe4655fd", 00:18:53.507 "is_configured": true, 00:18:53.507 "data_offset": 2048, 00:18:53.507 "data_size": 63488 00:18:53.507 }, 00:18:53.507 { 00:18:53.507 "name": "BaseBdev4", 00:18:53.507 "uuid": "5be0cabf-3e6f-527d-882d-c31a7277c0de", 00:18:53.507 "is_configured": true, 00:18:53.507 "data_offset": 2048, 00:18:53.507 "data_size": 63488 00:18:53.507 } 00:18:53.507 ] 00:18:53.507 }' 00:18:53.507 18:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.507 18:33:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.075 18:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:54.705 [2024-07-15 18:33:40.024705] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.705 [2024-07-15 18:33:40.024739] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.705 [2024-07-15 18:33:40.028155] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.705 [2024-07-15 18:33:40.028195] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.705 [2024-07-15 18:33:40.028235] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.705 [2024-07-15 18:33:40.028243] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13bad90 name raid_bdev1, state offline 00:18:54.705 0 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2848629 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2848629 ']' 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2848629 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2848629 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2848629' 00:18:54.705 killing process with pid 2848629 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2848629 00:18:54.705 [2024-07-15 18:33:40.114848] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.705 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2848629 00:18:54.705 [2024-07-15 18:33:40.144591] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.H3ftDpBvee 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.38 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.38 != \0\.\0\0 ]] 00:18:54.965 00:18:54.965 real 0m8.421s 00:18:54.965 user 0m14.009s 00:18:54.965 sys 0m1.162s 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:54.965 18:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.965 ************************************ 00:18:54.965 END TEST raid_write_error_test 00:18:54.965 ************************************ 00:18:54.965 18:33:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:54.965 18:33:40 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:54.965 
18:33:40 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:54.965 18:33:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:54.965 18:33:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.965 18:33:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.965 ************************************ 00:18:54.965 START TEST raid_state_function_test 00:18:54.965 ************************************ 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2850010 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2850010' 00:18:54.965 Process raid pid: 2850010 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2850010 /var/tmp/spdk-raid.sock 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2850010 ']' 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:54.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.965 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.965 [2024-07-15 18:33:40.460088] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:18:54.965 [2024-07-15 18:33:40.460135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.224 [2024-07-15 18:33:40.545552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.224 [2024-07-15 18:33:40.636276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.224 [2024-07-15 18:33:40.700568] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.224 [2024-07-15 18:33:40.700602] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.224 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.224 18:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:55.224 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:55.483 [2024-07-15 18:33:40.914346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.483 [2024-07-15 18:33:40.914386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.483 [2024-07-15 18:33:40.914395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.483 [2024-07-15 18:33:40.914403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.483 [2024-07-15 18:33:40.914410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.483 [2024-07-15 18:33:40.914423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:18:55.483 [2024-07-15 18:33:40.914429] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:55.483 [2024-07-15 18:33:40.914437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.483 18:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.742 18:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.742 "name": "Existed_Raid", 00:18:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.742 "strip_size_kb": 64, 
00:18:55.742 "state": "configuring", 00:18:55.742 "raid_level": "concat", 00:18:55.742 "superblock": false, 00:18:55.742 "num_base_bdevs": 4, 00:18:55.742 "num_base_bdevs_discovered": 0, 00:18:55.742 "num_base_bdevs_operational": 4, 00:18:55.742 "base_bdevs_list": [ 00:18:55.742 { 00:18:55.742 "name": "BaseBdev1", 00:18:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.742 "is_configured": false, 00:18:55.742 "data_offset": 0, 00:18:55.742 "data_size": 0 00:18:55.742 }, 00:18:55.742 { 00:18:55.742 "name": "BaseBdev2", 00:18:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.742 "is_configured": false, 00:18:55.742 "data_offset": 0, 00:18:55.742 "data_size": 0 00:18:55.742 }, 00:18:55.742 { 00:18:55.742 "name": "BaseBdev3", 00:18:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.742 "is_configured": false, 00:18:55.742 "data_offset": 0, 00:18:55.742 "data_size": 0 00:18:55.742 }, 00:18:55.742 { 00:18:55.742 "name": "BaseBdev4", 00:18:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.742 "is_configured": false, 00:18:55.742 "data_offset": 0, 00:18:55.742 "data_size": 0 00:18:55.742 } 00:18:55.742 ] 00:18:55.742 }' 00:18:55.742 18:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.742 18:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.678 18:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:56.937 [2024-07-15 18:33:42.366060] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.937 [2024-07-15 18:33:42.366093] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x215fbc0 name Existed_Raid, state configuring 00:18:56.937 18:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:57.197 [2024-07-15 18:33:42.682933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.197 [2024-07-15 18:33:42.682969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.197 [2024-07-15 18:33:42.682977] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.197 [2024-07-15 18:33:42.682985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.197 [2024-07-15 18:33:42.683001] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.197 [2024-07-15 18:33:42.683009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.197 [2024-07-15 18:33:42.683015] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:57.197 [2024-07-15 18:33:42.683023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:57.197 18:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:57.456 [2024-07-15 18:33:42.949038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.456 BaseBdev1 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@899 -- # local i 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:57.456 18:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.023 [ 00:18:58.023 { 00:18:58.023 "name": "BaseBdev1", 00:18:58.023 "aliases": [ 00:18:58.023 "c914ad8d-0de7-4f07-b6be-a474d2eef61f" 00:18:58.023 ], 00:18:58.023 "product_name": "Malloc disk", 00:18:58.023 "block_size": 512, 00:18:58.023 "num_blocks": 65536, 00:18:58.023 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f", 00:18:58.023 "assigned_rate_limits": { 00:18:58.023 "rw_ios_per_sec": 0, 00:18:58.023 "rw_mbytes_per_sec": 0, 00:18:58.023 "r_mbytes_per_sec": 0, 00:18:58.023 "w_mbytes_per_sec": 0 00:18:58.023 }, 00:18:58.023 "claimed": true, 00:18:58.023 "claim_type": "exclusive_write", 00:18:58.023 "zoned": false, 00:18:58.023 "supported_io_types": { 00:18:58.023 "read": true, 00:18:58.023 "write": true, 00:18:58.023 "unmap": true, 00:18:58.023 "flush": true, 00:18:58.023 "reset": true, 00:18:58.023 "nvme_admin": false, 00:18:58.023 "nvme_io": false, 00:18:58.023 "nvme_io_md": false, 00:18:58.023 "write_zeroes": true, 00:18:58.023 "zcopy": true, 00:18:58.023 "get_zone_info": false, 00:18:58.023 "zone_management": false, 00:18:58.023 "zone_append": false, 00:18:58.023 "compare": false, 00:18:58.023 "compare_and_write": false, 00:18:58.023 "abort": true, 00:18:58.023 "seek_hole": false, 00:18:58.023 "seek_data": false, 00:18:58.023 "copy": true, 00:18:58.023 "nvme_iov_md": 
false 00:18:58.023 }, 00:18:58.023 "memory_domains": [ 00:18:58.023 { 00:18:58.023 "dma_device_id": "system", 00:18:58.023 "dma_device_type": 1 00:18:58.023 }, 00:18:58.023 { 00:18:58.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.023 "dma_device_type": 2 00:18:58.023 } 00:18:58.023 ], 00:18:58.023 "driver_specific": {} 00:18:58.023 } 00:18:58.023 ] 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.023 18:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.590 18:33:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.590 "name": "Existed_Raid", 00:18:58.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.590 "strip_size_kb": 64, 00:18:58.590 "state": "configuring", 00:18:58.590 "raid_level": "concat", 00:18:58.590 "superblock": false, 00:18:58.590 "num_base_bdevs": 4, 00:18:58.590 "num_base_bdevs_discovered": 1, 00:18:58.590 "num_base_bdevs_operational": 4, 00:18:58.590 "base_bdevs_list": [ 00:18:58.590 { 00:18:58.590 "name": "BaseBdev1", 00:18:58.590 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f", 00:18:58.590 "is_configured": true, 00:18:58.590 "data_offset": 0, 00:18:58.590 "data_size": 65536 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "name": "BaseBdev2", 00:18:58.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.590 "is_configured": false, 00:18:58.590 "data_offset": 0, 00:18:58.590 "data_size": 0 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "name": "BaseBdev3", 00:18:58.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.590 "is_configured": false, 00:18:58.590 "data_offset": 0, 00:18:58.590 "data_size": 0 00:18:58.590 }, 00:18:58.590 { 00:18:58.590 "name": "BaseBdev4", 00:18:58.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.590 "is_configured": false, 00:18:58.590 "data_offset": 0, 00:18:58.590 "data_size": 0 00:18:58.590 } 00:18:58.590 ] 00:18:58.590 }' 00:18:58.590 18:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.590 18:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.525 18:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:59.827 [2024-07-15 18:33:45.150984] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.828 [2024-07-15 18:33:45.151028] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x215f430 name Existed_Raid, state configuring 00:18:59.828 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.086 [2024-07-15 18:33:45.407717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.086 [2024-07-15 18:33:45.409253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.086 [2024-07-15 18:33:45.409284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.086 [2024-07-15 18:33:45.409293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.086 [2024-07-15 18:33:45.409301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.086 [2024-07-15 18:33:45.409308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.086 [2024-07-15 18:33:45.409316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.086 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.345 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.345 "name": "Existed_Raid", 00:19:00.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.345 "strip_size_kb": 64, 00:19:00.345 "state": "configuring", 00:19:00.345 "raid_level": "concat", 00:19:00.345 "superblock": false, 00:19:00.345 "num_base_bdevs": 4, 00:19:00.345 "num_base_bdevs_discovered": 1, 00:19:00.345 "num_base_bdevs_operational": 4, 00:19:00.345 "base_bdevs_list": [ 00:19:00.345 { 00:19:00.345 "name": "BaseBdev1", 00:19:00.345 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f", 00:19:00.345 "is_configured": true, 00:19:00.345 "data_offset": 0, 00:19:00.345 "data_size": 65536 00:19:00.345 }, 00:19:00.345 { 00:19:00.345 "name": "BaseBdev2", 00:19:00.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.345 "is_configured": false, 00:19:00.345 "data_offset": 0, 00:19:00.345 "data_size": 0 00:19:00.345 }, 00:19:00.345 { 00:19:00.345 "name": "BaseBdev3", 
00:19:00.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.345 "is_configured": false, 00:19:00.345 "data_offset": 0, 00:19:00.345 "data_size": 0 00:19:00.345 }, 00:19:00.345 { 00:19:00.345 "name": "BaseBdev4", 00:19:00.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.345 "is_configured": false, 00:19:00.345 "data_offset": 0, 00:19:00.345 "data_size": 0 00:19:00.345 } 00:19:00.345 ] 00:19:00.345 }' 00:19:00.345 18:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.345 18:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.912 18:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:01.479 [2024-07-15 18:33:46.919067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.479 BaseBdev2 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:01.479 18:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.737 18:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:19:01.995 [
00:19:01.995 {
00:19:01.995 "name": "BaseBdev2",
00:19:01.995 "aliases": [
00:19:01.995 "c985659c-6ed6-48a2-a559-9b9cdc248336"
00:19:01.995 ],
00:19:01.995 "product_name": "Malloc disk",
00:19:01.995 "block_size": 512,
00:19:01.995 "num_blocks": 65536,
00:19:01.995 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:01.995 "assigned_rate_limits": {
00:19:01.995 "rw_ios_per_sec": 0,
00:19:01.995 "rw_mbytes_per_sec": 0,
00:19:01.995 "r_mbytes_per_sec": 0,
00:19:01.995 "w_mbytes_per_sec": 0
00:19:01.995 },
00:19:01.995 "claimed": true,
00:19:01.995 "claim_type": "exclusive_write",
00:19:01.995 "zoned": false,
00:19:01.995 "supported_io_types": {
00:19:01.995 "read": true,
00:19:01.995 "write": true,
00:19:01.995 "unmap": true,
00:19:01.995 "flush": true,
00:19:01.995 "reset": true,
00:19:01.995 "nvme_admin": false,
00:19:01.995 "nvme_io": false,
00:19:01.995 "nvme_io_md": false,
00:19:01.995 "write_zeroes": true,
00:19:01.995 "zcopy": true,
00:19:01.995 "get_zone_info": false,
00:19:01.995 "zone_management": false,
00:19:01.995 "zone_append": false,
00:19:01.995 "compare": false,
00:19:01.995 "compare_and_write": false,
00:19:01.995 "abort": true,
00:19:01.995 "seek_hole": false,
00:19:01.995 "seek_data": false,
00:19:01.995 "copy": true,
00:19:01.995 "nvme_iov_md": false
00:19:01.995 },
00:19:01.995 "memory_domains": [
00:19:01.995 {
00:19:01.995 "dma_device_id": "system",
00:19:01.995 "dma_device_type": 1
00:19:01.995 },
00:19:01.995 {
00:19:01.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:01.995 "dma_device_type": 2
00:19:01.995 }
00:19:01.995 ],
00:19:01.995 "driver_specific": {}
00:19:01.996 }
00:19:01.996 ]
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:01.996 18:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:02.563 18:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:19:02.563 "name": "Existed_Raid",
00:19:02.563 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:02.563 "strip_size_kb": 64,
00:19:02.563 "state": "configuring",
00:19:02.563 "raid_level": "concat",
00:19:02.563 "superblock": false,
00:19:02.563 "num_base_bdevs": 4,
00:19:02.563 "num_base_bdevs_discovered": 2,
00:19:02.563 "num_base_bdevs_operational": 4,
00:19:02.563 "base_bdevs_list": [
00:19:02.563 {
00:19:02.563 "name": "BaseBdev1",
00:19:02.563 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f",
00:19:02.563 "is_configured": true,
00:19:02.563 "data_offset": 0,
00:19:02.563 "data_size": 65536
00:19:02.563 },
00:19:02.563 {
00:19:02.563 "name": "BaseBdev2",
00:19:02.563 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:02.563 "is_configured": true,
00:19:02.563 "data_offset": 0,
00:19:02.563 "data_size": 65536
00:19:02.563 },
00:19:02.563 {
00:19:02.563 "name": "BaseBdev3",
00:19:02.563 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:02.563 "is_configured": false,
00:19:02.563 "data_offset": 0,
00:19:02.563 "data_size": 0
00:19:02.563 },
00:19:02.563 {
00:19:02.563 "name": "BaseBdev4",
00:19:02.563 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:02.563 "is_configured": false,
00:19:02.563 "data_offset": 0,
00:19:02.563 "data_size": 0
00:19:02.563 }
00:19:02.563 ]
00:19:02.563 }'
00:19:02.563 18:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:19:02.563 18:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:03.500 18:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:19:03.759 [2024-07-15 18:33:49.116245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:03.759 BaseBdev3
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:19:03.759 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:04.018 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:19:04.277 [
00:19:04.277 {
00:19:04.277 "name": "BaseBdev3",
00:19:04.277 "aliases": [
00:19:04.277 "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066"
00:19:04.277 ],
00:19:04.277 "product_name": "Malloc disk",
00:19:04.277 "block_size": 512,
00:19:04.277 "num_blocks": 65536,
00:19:04.277 "uuid": "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066",
00:19:04.277 "assigned_rate_limits": {
00:19:04.277 "rw_ios_per_sec": 0,
00:19:04.277 "rw_mbytes_per_sec": 0,
00:19:04.277 "r_mbytes_per_sec": 0,
00:19:04.277 "w_mbytes_per_sec": 0
00:19:04.277 },
00:19:04.277 "claimed": true,
00:19:04.277 "claim_type": "exclusive_write",
00:19:04.277 "zoned": false,
00:19:04.277 "supported_io_types": {
00:19:04.277 "read": true,
00:19:04.277 "write": true,
00:19:04.277 "unmap": true,
00:19:04.277 "flush": true,
00:19:04.277 "reset": true,
00:19:04.277 "nvme_admin": false,
00:19:04.277 "nvme_io": false,
00:19:04.277 "nvme_io_md": false,
00:19:04.277 "write_zeroes": true,
00:19:04.277 "zcopy": true,
00:19:04.277 "get_zone_info": false,
00:19:04.277 "zone_management": false,
00:19:04.277 "zone_append": false,
00:19:04.277 "compare": false,
00:19:04.277 "compare_and_write": false,
00:19:04.277 "abort": true,
00:19:04.277 "seek_hole": false,
00:19:04.277 "seek_data": false,
00:19:04.277 "copy": true,
00:19:04.277 "nvme_iov_md": false
00:19:04.277 },
00:19:04.277 "memory_domains": [
00:19:04.277 {
00:19:04.277 "dma_device_id": "system",
00:19:04.277 "dma_device_type": 1
00:19:04.277 },
00:19:04.277 {
00:19:04.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:04.277 "dma_device_type": 2
00:19:04.277 }
00:19:04.277 ],
00:19:04.277 "driver_specific": {}
00:19:04.277 }
00:19:04.277 ]
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:04.277 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:04.536 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:19:04.536 "name": "Existed_Raid",
00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:04.536 "strip_size_kb": 64,
00:19:04.536 "state": "configuring",
00:19:04.536 "raid_level": "concat",
00:19:04.536 "superblock": false,
00:19:04.536 "num_base_bdevs": 4,
00:19:04.536 "num_base_bdevs_discovered": 3,
00:19:04.536 "num_base_bdevs_operational": 4,
00:19:04.536 "base_bdevs_list": [
00:19:04.536 {
00:19:04.536 "name": "BaseBdev1",
00:19:04.536 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f",
00:19:04.536 "is_configured": true,
00:19:04.536 "data_offset": 0,
00:19:04.536 "data_size": 65536
00:19:04.536 },
00:19:04.536 {
00:19:04.536 "name": "BaseBdev2",
00:19:04.536 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:04.536 "is_configured": true,
00:19:04.536 "data_offset": 0,
00:19:04.536 "data_size": 65536
00:19:04.536 },
00:19:04.536 {
00:19:04.536 "name": "BaseBdev3",
00:19:04.536 "uuid": "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066",
00:19:04.536 "is_configured": true,
00:19:04.536 "data_offset": 0,
00:19:04.536 "data_size": 65536
00:19:04.536 },
00:19:04.536 {
00:19:04.536 "name": "BaseBdev4",
00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:04.536 "is_configured": false,
00:19:04.536 "data_offset": 0,
00:19:04.536 "data_size": 0
00:19:04.536 }
00:19:04.536 ]
00:19:04.536 }'
00:19:04.536 18:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:19:04.536 18:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:19:05.472 [2024-07-15 18:33:50.968421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:19:05.472 [2024-07-15 18:33:50.968458] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2160490
00:19:05.472 [2024-07-15 18:33:50.968464] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:19:05.472 [2024-07-15 18:33:50.968665] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x214c2d0
00:19:05.472 [2024-07-15 18:33:50.968794] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2160490
00:19:05.472 [2024-07-15 18:33:50.968802] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2160490
00:19:05.472 [2024-07-15 18:33:50.968982] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:05.472 BaseBdev4
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:19:05.472 18:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:05.731 18:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:19:05.989 [
00:19:05.989 {
00:19:05.989 "name": "BaseBdev4",
00:19:05.989 "aliases": [
00:19:05.989 "fd25a6f2-17f2-444e-829d-e973a38d346d"
00:19:05.989 ],
00:19:05.989 "product_name": "Malloc disk",
00:19:05.989 "block_size": 512,
00:19:05.989 "num_blocks": 65536,
00:19:05.989 "uuid": "fd25a6f2-17f2-444e-829d-e973a38d346d",
00:19:05.989 "assigned_rate_limits": {
00:19:05.989 "rw_ios_per_sec": 0,
00:19:05.989 "rw_mbytes_per_sec": 0,
00:19:05.989 "r_mbytes_per_sec": 0,
00:19:05.989 "w_mbytes_per_sec": 0
00:19:05.989 },
00:19:05.989 "claimed": true,
00:19:05.989 "claim_type": "exclusive_write",
00:19:05.989 "zoned": false,
00:19:05.989 "supported_io_types": {
00:19:05.989 "read": true,
00:19:05.989 "write": true,
00:19:05.989 "unmap": true,
00:19:05.989 "flush": true,
00:19:05.989 "reset": true,
00:19:05.989 "nvme_admin": false,
00:19:05.989 "nvme_io": false,
00:19:05.989 "nvme_io_md": false,
00:19:05.989 "write_zeroes": true,
00:19:05.989 "zcopy": true,
00:19:05.989 "get_zone_info": false,
00:19:05.989 "zone_management": false,
00:19:05.989 "zone_append": false,
00:19:05.989 "compare": false,
00:19:05.989 "compare_and_write": false,
00:19:05.989 "abort": true,
00:19:05.989 "seek_hole": false,
00:19:05.989 "seek_data": false,
00:19:05.989 "copy": true,
00:19:05.989 "nvme_iov_md": false
00:19:05.989 },
00:19:05.989 "memory_domains": [
00:19:05.989 {
00:19:05.989 "dma_device_id": "system",
00:19:05.989 "dma_device_type": 1
00:19:05.989 },
00:19:05.989 {
00:19:05.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:05.989 "dma_device_type": 2
00:19:05.989 }
00:19:05.989 ],
00:19:05.989 "driver_specific": {}
00:19:05.989 }
00:19:05.989 ]
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:05.989 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:06.248 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:19:06.248 "name": "Existed_Raid",
00:19:06.248 "uuid": "42d058ba-8295-491c-9392-a521216ce529",
00:19:06.248 "strip_size_kb": 64,
00:19:06.248 "state": "online",
00:19:06.248 "raid_level": "concat",
00:19:06.248 "superblock": false,
00:19:06.248 "num_base_bdevs": 4,
00:19:06.248 "num_base_bdevs_discovered": 4,
00:19:06.248 "num_base_bdevs_operational": 4,
00:19:06.248 "base_bdevs_list": [
00:19:06.248 {
00:19:06.248 "name": "BaseBdev1",
00:19:06.248 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f",
00:19:06.248 "is_configured": true,
00:19:06.248 "data_offset": 0,
00:19:06.248 "data_size": 65536
00:19:06.248 },
00:19:06.248 {
00:19:06.248 "name": "BaseBdev2",
00:19:06.248 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:06.248 "is_configured": true,
00:19:06.248 "data_offset": 0,
00:19:06.248 "data_size": 65536
00:19:06.248 },
00:19:06.248 {
00:19:06.248 "name": "BaseBdev3",
00:19:06.248 "uuid": "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066",
00:19:06.248 "is_configured": true,
00:19:06.248 "data_offset": 0,
00:19:06.248 "data_size": 65536
00:19:06.248 },
00:19:06.248 {
00:19:06.248 "name": "BaseBdev4",
00:19:06.248 "uuid": "fd25a6f2-17f2-444e-829d-e973a38d346d",
00:19:06.248 "is_configured": true,
00:19:06.248 "data_offset": 0,
00:19:06.248 "data_size": 65536
00:19:06.248 }
00:19:06.248 ]
00:19:06.248 }'
00:19:06.248 18:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:19:06.248 18:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:19:07.183 [2024-07-15 18:33:52.657350] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:07.183 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:19:07.183 "name": "Existed_Raid",
00:19:07.183 "aliases": [
00:19:07.183 "42d058ba-8295-491c-9392-a521216ce529"
00:19:07.183 ],
00:19:07.183 "product_name": "Raid Volume",
00:19:07.183 "block_size": 512,
00:19:07.183 "num_blocks": 262144,
00:19:07.183 "uuid": "42d058ba-8295-491c-9392-a521216ce529",
00:19:07.183 "assigned_rate_limits": {
00:19:07.183 "rw_ios_per_sec": 0,
00:19:07.183 "rw_mbytes_per_sec": 0,
00:19:07.183 "r_mbytes_per_sec": 0,
00:19:07.183 "w_mbytes_per_sec": 0
00:19:07.183 },
00:19:07.183 "claimed": false,
00:19:07.183 "zoned": false,
00:19:07.183 "supported_io_types": {
00:19:07.183 "read": true,
00:19:07.183 "write": true,
00:19:07.183 "unmap": true,
00:19:07.183 "flush": true,
00:19:07.183 "reset": true,
00:19:07.183 "nvme_admin": false,
00:19:07.183 "nvme_io": false,
00:19:07.183 "nvme_io_md": false,
00:19:07.183 "write_zeroes": true,
00:19:07.183 "zcopy": false,
00:19:07.183 "get_zone_info": false,
00:19:07.183 "zone_management": false,
00:19:07.183 "zone_append": false,
00:19:07.183 "compare": false,
00:19:07.183 "compare_and_write": false,
00:19:07.183 "abort": false,
00:19:07.183 "seek_hole": false,
00:19:07.183 "seek_data": false,
00:19:07.183 "copy": false,
00:19:07.183 "nvme_iov_md": false
00:19:07.183 },
00:19:07.183 "memory_domains": [
00:19:07.183 {
00:19:07.183 "dma_device_id": "system",
00:19:07.183 "dma_device_type": 1
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:07.183 "dma_device_type": 2
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "system",
00:19:07.183 "dma_device_type": 1
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:07.183 "dma_device_type": 2
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "system",
00:19:07.183 "dma_device_type": 1
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:07.183 "dma_device_type": 2
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "system",
00:19:07.183 "dma_device_type": 1
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:07.183 "dma_device_type": 2
00:19:07.183 }
00:19:07.183 ],
00:19:07.183 "driver_specific": {
00:19:07.183 "raid": {
00:19:07.183 "uuid": "42d058ba-8295-491c-9392-a521216ce529",
00:19:07.183 "strip_size_kb": 64,
00:19:07.183 "state": "online",
00:19:07.183 "raid_level": "concat",
00:19:07.183 "superblock": false,
00:19:07.183 "num_base_bdevs": 4,
00:19:07.183 "num_base_bdevs_discovered": 4,
00:19:07.183 "num_base_bdevs_operational": 4,
00:19:07.183 "base_bdevs_list": [
00:19:07.183 {
00:19:07.183 "name": "BaseBdev1",
00:19:07.183 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f",
00:19:07.183 "is_configured": true,
00:19:07.183 "data_offset": 0,
00:19:07.183 "data_size": 65536
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "name": "BaseBdev2",
00:19:07.183 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:07.183 "is_configured": true,
00:19:07.183 "data_offset": 0,
00:19:07.183 "data_size": 65536
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "name": "BaseBdev3",
00:19:07.183 "uuid": "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066",
00:19:07.183 "is_configured": true,
00:19:07.183 "data_offset": 0,
00:19:07.183 "data_size": 65536
00:19:07.183 },
00:19:07.183 {
00:19:07.183 "name": "BaseBdev4",
00:19:07.183 "uuid": "fd25a6f2-17f2-444e-829d-e973a38d346d",
00:19:07.183 "is_configured": true,
00:19:07.183 "data_offset": 0,
00:19:07.183 "data_size": 65536
00:19:07.183 }
00:19:07.183 ]
00:19:07.183 }
00:19:07.183 }
00:19:07.183 }'
00:19:07.184 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:19:07.184 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:19:07.184 BaseBdev2
00:19:07.184 BaseBdev3
00:19:07.184 BaseBdev4'
00:19:07.184 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:07.184 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:19:07.184 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:07.752 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:07.752 "name": "BaseBdev1",
00:19:07.753 "aliases": [
00:19:07.753 "c914ad8d-0de7-4f07-b6be-a474d2eef61f"
00:19:07.753 ],
00:19:07.753 "product_name": "Malloc disk",
00:19:07.753 "block_size": 512,
00:19:07.753 "num_blocks": 65536,
00:19:07.753 "uuid": "c914ad8d-0de7-4f07-b6be-a474d2eef61f",
00:19:07.753 "assigned_rate_limits": {
00:19:07.753 "rw_ios_per_sec": 0,
00:19:07.753 "rw_mbytes_per_sec": 0,
00:19:07.753 "r_mbytes_per_sec": 0,
00:19:07.753 "w_mbytes_per_sec": 0
00:19:07.753 },
00:19:07.753 "claimed": true,
00:19:07.753 "claim_type": "exclusive_write",
00:19:07.753 "zoned": false,
00:19:07.753 "supported_io_types": {
00:19:07.753 "read": true,
00:19:07.753 "write": true,
00:19:07.753 "unmap": true,
00:19:07.753 "flush": true,
00:19:07.753 "reset": true,
00:19:07.753 "nvme_admin": false,
00:19:07.753 "nvme_io": false,
00:19:07.753 "nvme_io_md": false,
00:19:07.753 "write_zeroes": true,
00:19:07.753 "zcopy": true,
00:19:07.753 "get_zone_info": false,
00:19:07.753 "zone_management": false,
00:19:07.753 "zone_append": false,
00:19:07.753 "compare": false,
00:19:07.753 "compare_and_write": false,
00:19:07.753 "abort": true,
00:19:07.753 "seek_hole": false,
00:19:07.753 "seek_data": false,
00:19:07.753 "copy": true,
00:19:07.753 "nvme_iov_md": false
00:19:07.753 },
00:19:07.753 "memory_domains": [
00:19:07.753 {
00:19:07.753 "dma_device_id": "system",
00:19:07.753 "dma_device_type": 1
00:19:07.753 },
00:19:07.753 {
00:19:07.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:07.753 "dma_device_type": 2
00:19:07.753 }
00:19:07.753 ],
00:19:07.753 "driver_specific": {}
00:19:07.753 }'
00:19:07.753 18:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:07.753 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:08.011 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:08.012 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:08.012 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:08.012 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:19:08.012 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:08.270 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:08.270 "name": "BaseBdev2",
00:19:08.270 "aliases": [
00:19:08.270 "c985659c-6ed6-48a2-a559-9b9cdc248336"
00:19:08.270 ],
00:19:08.270 "product_name": "Malloc disk",
00:19:08.270 "block_size": 512,
00:19:08.270 "num_blocks": 65536,
00:19:08.270 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:08.270 "assigned_rate_limits": {
00:19:08.270 "rw_ios_per_sec": 0,
00:19:08.270 "rw_mbytes_per_sec": 0,
00:19:08.270 "r_mbytes_per_sec": 0,
00:19:08.270 "w_mbytes_per_sec": 0
00:19:08.270 },
00:19:08.270 "claimed": true,
00:19:08.270 "claim_type": "exclusive_write",
00:19:08.270 "zoned": false,
00:19:08.270 "supported_io_types": {
00:19:08.270 "read": true,
00:19:08.270 "write": true,
00:19:08.270 "unmap": true,
00:19:08.270 "flush": true,
00:19:08.270 "reset": true,
00:19:08.270 "nvme_admin": false,
00:19:08.270 "nvme_io": false,
00:19:08.270 "nvme_io_md": false,
00:19:08.270 "write_zeroes": true,
00:19:08.270 "zcopy": true,
00:19:08.270 "get_zone_info": false,
00:19:08.270 "zone_management": false,
00:19:08.270 "zone_append": false,
00:19:08.270 "compare": false,
00:19:08.270 "compare_and_write": false,
00:19:08.270 "abort": true,
00:19:08.270 "seek_hole": false,
00:19:08.270 "seek_data": false,
00:19:08.270 "copy": true,
00:19:08.270 "nvme_iov_md": false
00:19:08.270 },
00:19:08.270 "memory_domains": [
00:19:08.271 {
00:19:08.271 "dma_device_id": "system",
00:19:08.271 "dma_device_type": 1
00:19:08.271 },
00:19:08.271 {
00:19:08.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:08.271 "dma_device_type": 2
00:19:08.271 }
00:19:08.271 ],
00:19:08.271 "driver_specific": {}
00:19:08.271 }'
00:19:08.271 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:08.271 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:08.271 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:08.271 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:08.530 18:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:08.530 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:08.530 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:08.530 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:19:08.530 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:08.788 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:08.788 "name": "BaseBdev3",
00:19:08.788 "aliases": [
00:19:08.788 "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066"
00:19:08.788 ],
00:19:08.788 "product_name": "Malloc disk",
00:19:08.788 "block_size": 512,
00:19:08.788 "num_blocks": 65536,
00:19:08.788 "uuid": "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066",
00:19:08.788 "assigned_rate_limits": {
00:19:08.788 "rw_ios_per_sec": 0,
00:19:08.788 "rw_mbytes_per_sec": 0,
00:19:08.788 "r_mbytes_per_sec": 0,
00:19:08.788 "w_mbytes_per_sec": 0
00:19:08.788 },
00:19:08.788 "claimed": true,
00:19:08.788 "claim_type": "exclusive_write",
00:19:08.788 "zoned": false,
00:19:08.788 "supported_io_types": {
00:19:08.788 "read": true,
00:19:08.788 "write": true,
00:19:08.788 "unmap": true,
00:19:08.788 "flush": true,
00:19:08.788 "reset": true,
00:19:08.788 "nvme_admin": false,
00:19:08.788 "nvme_io": false,
00:19:08.788 "nvme_io_md": false,
00:19:08.788 "write_zeroes": true,
00:19:08.788 "zcopy": true,
00:19:08.788 "get_zone_info": false,
00:19:08.788 "zone_management": false,
00:19:08.788 "zone_append": false,
00:19:08.788 "compare": false,
00:19:08.788 "compare_and_write": false,
00:19:08.788 "abort": true,
00:19:08.788 "seek_hole": false,
00:19:08.788 "seek_data": false,
00:19:08.788 "copy": true,
00:19:08.788 "nvme_iov_md": false
00:19:08.788 },
00:19:08.788 "memory_domains": [
00:19:08.788 {
00:19:08.788 "dma_device_id": "system",
00:19:08.788 "dma_device_type": 1
00:19:08.788 },
00:19:08.788 {
00:19:08.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:08.788 "dma_device_type": 2
00:19:08.788 }
00:19:08.788 ],
00:19:08.788 "driver_specific": {}
00:19:08.788 }'
00:19:08.788 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:08.788 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:09.047 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:09.305 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:09.305 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:09.305 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4
00:19:09.305 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:09.564 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:09.564 "name": "BaseBdev4",
00:19:09.564 "aliases": [
00:19:09.564 "fd25a6f2-17f2-444e-829d-e973a38d346d"
00:19:09.564 ],
00:19:09.564 "product_name": "Malloc disk",
00:19:09.564 "block_size": 512,
00:19:09.564 "num_blocks": 65536,
00:19:09.564 "uuid": "fd25a6f2-17f2-444e-829d-e973a38d346d",
00:19:09.564 "assigned_rate_limits": {
00:19:09.564 "rw_ios_per_sec": 0,
00:19:09.564 "rw_mbytes_per_sec": 0,
00:19:09.564 "r_mbytes_per_sec": 0,
00:19:09.564 "w_mbytes_per_sec": 0
00:19:09.564 },
00:19:09.564 "claimed": true,
00:19:09.564 "claim_type": "exclusive_write",
00:19:09.564 "zoned": false,
00:19:09.564 "supported_io_types": {
00:19:09.564 "read": true,
00:19:09.564 "write": true,
00:19:09.564 "unmap": true,
00:19:09.564 "flush": true,
00:19:09.564 "reset": true,
00:19:09.564 "nvme_admin": false,
00:19:09.564 "nvme_io": false,
00:19:09.564 "nvme_io_md": false,
00:19:09.564 "write_zeroes": true,
00:19:09.564 "zcopy": true,
00:19:09.564 "get_zone_info": false,
00:19:09.564 "zone_management": false,
00:19:09.564 "zone_append": false,
00:19:09.564 "compare": false,
00:19:09.564 "compare_and_write": false,
00:19:09.564 "abort": true,
00:19:09.564 "seek_hole": false,
00:19:09.564 "seek_data": false,
00:19:09.564 "copy": true,
00:19:09.564 "nvme_iov_md": false
00:19:09.564 },
00:19:09.564 "memory_domains": [
00:19:09.564 {
00:19:09.564 "dma_device_id": "system",
00:19:09.564 "dma_device_type": 1
00:19:09.564 },
00:19:09.564 {
00:19:09.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:09.564 "dma_device_type": 2
00:19:09.564 }
00:19:09.564 ],
00:19:09.564 "driver_specific": {}
00:19:09.564 }'
00:19:09.564 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:09.564 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:09.564 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:09.564 18:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:09.564 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:09.564 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:09.564 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:09.564 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:09.823 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:09.823 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:09.823 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:09.823 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:09.823 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:19:10.082 [2024-07-15 18:33:55.476630] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:10.082 [2024-07-15 18:33:55.476658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:10.082 [2024-07-15 18:33:55.476704] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:10.082 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:10.342 18:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:19:10.342 "name": "Existed_Raid",
00:19:10.342 "uuid": "42d058ba-8295-491c-9392-a521216ce529",
00:19:10.342 "strip_size_kb": 64,
00:19:10.342 "state": "offline",
00:19:10.342 "raid_level": "concat",
00:19:10.342 "superblock": false,
00:19:10.342 "num_base_bdevs": 4,
00:19:10.342 "num_base_bdevs_discovered": 3,
00:19:10.342 "num_base_bdevs_operational": 3,
00:19:10.342 "base_bdevs_list": [
00:19:10.342 {
00:19:10.342 "name": null,
00:19:10.342 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:10.342 "is_configured": false,
00:19:10.342 "data_offset": 0,
00:19:10.342 "data_size": 65536
00:19:10.342 },
00:19:10.342 {
00:19:10.342 "name": "BaseBdev2",
00:19:10.342 "uuid": "c985659c-6ed6-48a2-a559-9b9cdc248336",
00:19:10.342 "is_configured": true,
00:19:10.342 "data_offset": 0,
00:19:10.342 "data_size": 65536
00:19:10.342 },
00:19:10.342 {
00:19:10.342 "name": "BaseBdev3",
00:19:10.342 "uuid": "1a9a4b9b-bd27-49cc-bdbd-27c82bcbe066",
00:19:10.342 "is_configured": true,
00:19:10.342 "data_offset": 0,
00:19:10.342 "data_size": 65536
00:19:10.342 },
00:19:10.342 {
00:19:10.342 "name": "BaseBdev4",
00:19:10.342 "uuid": "fd25a6f2-17f2-444e-829d-e973a38d346d",
00:19:10.342 "is_configured": true,
00:19:10.342 "data_offset": 0,
00:19:10.342 "data_size": 65536
00:19:10.342 }
00:19:10.342 ]
00:19:10.342 }'
00:19:10.342 18:33:55
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.342 18:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:10.910 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.910 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.910 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:11.169 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:11.169 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.169 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:11.428 [2024-07-15 18:33:56.873471] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.428 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:11.428 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.428 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.428 18:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:11.686 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:11.686 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:19:11.686 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:11.985 [2024-07-15 18:33:57.397368] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:11.985 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:11.985 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.985 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.985 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:12.244 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:12.244 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.244 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:12.503 [2024-07-15 18:33:57.921368] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:12.503 [2024-07-15 18:33:57.921411] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2160490 name Existed_Raid, state offline 00:19:12.503 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:12.503 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:12.503 18:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.503 18:33:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:12.762 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:12.762 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:12.762 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:12.762 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:12.762 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.762 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.021 BaseBdev2 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:13.021 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.279 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.538 [ 00:19:13.538 { 00:19:13.538 "name": "BaseBdev2", 00:19:13.538 "aliases": 
[ 00:19:13.538 "443885d7-f7e6-49cb-8632-43fe760f2e7a" 00:19:13.538 ], 00:19:13.538 "product_name": "Malloc disk", 00:19:13.538 "block_size": 512, 00:19:13.538 "num_blocks": 65536, 00:19:13.538 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:13.538 "assigned_rate_limits": { 00:19:13.538 "rw_ios_per_sec": 0, 00:19:13.538 "rw_mbytes_per_sec": 0, 00:19:13.538 "r_mbytes_per_sec": 0, 00:19:13.538 "w_mbytes_per_sec": 0 00:19:13.538 }, 00:19:13.538 "claimed": false, 00:19:13.538 "zoned": false, 00:19:13.538 "supported_io_types": { 00:19:13.538 "read": true, 00:19:13.538 "write": true, 00:19:13.538 "unmap": true, 00:19:13.538 "flush": true, 00:19:13.538 "reset": true, 00:19:13.538 "nvme_admin": false, 00:19:13.538 "nvme_io": false, 00:19:13.538 "nvme_io_md": false, 00:19:13.538 "write_zeroes": true, 00:19:13.538 "zcopy": true, 00:19:13.538 "get_zone_info": false, 00:19:13.538 "zone_management": false, 00:19:13.538 "zone_append": false, 00:19:13.538 "compare": false, 00:19:13.538 "compare_and_write": false, 00:19:13.538 "abort": true, 00:19:13.538 "seek_hole": false, 00:19:13.538 "seek_data": false, 00:19:13.538 "copy": true, 00:19:13.538 "nvme_iov_md": false 00:19:13.538 }, 00:19:13.538 "memory_domains": [ 00:19:13.538 { 00:19:13.538 "dma_device_id": "system", 00:19:13.538 "dma_device_type": 1 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.538 "dma_device_type": 2 00:19:13.538 } 00:19:13.538 ], 00:19:13.538 "driver_specific": {} 00:19:13.538 } 00:19:13.538 ] 00:19:13.538 18:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:13.538 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:13.538 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.538 18:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.796 BaseBdev3 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:13.796 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.055 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.313 [ 00:19:14.313 { 00:19:14.313 "name": "BaseBdev3", 00:19:14.313 "aliases": [ 00:19:14.313 "3045d446-c78f-4882-9039-a4fa948c4b2f" 00:19:14.313 ], 00:19:14.313 "product_name": "Malloc disk", 00:19:14.313 "block_size": 512, 00:19:14.313 "num_blocks": 65536, 00:19:14.313 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:14.313 "assigned_rate_limits": { 00:19:14.313 "rw_ios_per_sec": 0, 00:19:14.313 "rw_mbytes_per_sec": 0, 00:19:14.313 "r_mbytes_per_sec": 0, 00:19:14.313 "w_mbytes_per_sec": 0 00:19:14.313 }, 00:19:14.313 "claimed": false, 00:19:14.313 "zoned": false, 00:19:14.313 "supported_io_types": { 00:19:14.313 "read": true, 00:19:14.313 "write": true, 00:19:14.313 "unmap": true, 00:19:14.313 "flush": true, 00:19:14.313 "reset": true, 00:19:14.313 "nvme_admin": false, 00:19:14.313 
"nvme_io": false, 00:19:14.313 "nvme_io_md": false, 00:19:14.313 "write_zeroes": true, 00:19:14.313 "zcopy": true, 00:19:14.313 "get_zone_info": false, 00:19:14.313 "zone_management": false, 00:19:14.313 "zone_append": false, 00:19:14.313 "compare": false, 00:19:14.313 "compare_and_write": false, 00:19:14.313 "abort": true, 00:19:14.313 "seek_hole": false, 00:19:14.313 "seek_data": false, 00:19:14.313 "copy": true, 00:19:14.313 "nvme_iov_md": false 00:19:14.313 }, 00:19:14.313 "memory_domains": [ 00:19:14.313 { 00:19:14.313 "dma_device_id": "system", 00:19:14.313 "dma_device_type": 1 00:19:14.313 }, 00:19:14.313 { 00:19:14.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.313 "dma_device_type": 2 00:19:14.313 } 00:19:14.313 ], 00:19:14.313 "driver_specific": {} 00:19:14.313 } 00:19:14.313 ] 00:19:14.313 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:14.313 18:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:14.313 18:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:14.313 18:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:14.572 BaseBdev4 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.572 18:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.830 18:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:15.089 [ 00:19:15.089 { 00:19:15.089 "name": "BaseBdev4", 00:19:15.089 "aliases": [ 00:19:15.089 "666d8993-5390-491d-9982-b866387a3ef8" 00:19:15.089 ], 00:19:15.089 "product_name": "Malloc disk", 00:19:15.089 "block_size": 512, 00:19:15.089 "num_blocks": 65536, 00:19:15.089 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:15.089 "assigned_rate_limits": { 00:19:15.089 "rw_ios_per_sec": 0, 00:19:15.089 "rw_mbytes_per_sec": 0, 00:19:15.089 "r_mbytes_per_sec": 0, 00:19:15.089 "w_mbytes_per_sec": 0 00:19:15.089 }, 00:19:15.089 "claimed": false, 00:19:15.089 "zoned": false, 00:19:15.089 "supported_io_types": { 00:19:15.089 "read": true, 00:19:15.089 "write": true, 00:19:15.089 "unmap": true, 00:19:15.089 "flush": true, 00:19:15.089 "reset": true, 00:19:15.089 "nvme_admin": false, 00:19:15.089 "nvme_io": false, 00:19:15.089 "nvme_io_md": false, 00:19:15.089 "write_zeroes": true, 00:19:15.089 "zcopy": true, 00:19:15.089 "get_zone_info": false, 00:19:15.089 "zone_management": false, 00:19:15.089 "zone_append": false, 00:19:15.089 "compare": false, 00:19:15.089 "compare_and_write": false, 00:19:15.089 "abort": true, 00:19:15.089 "seek_hole": false, 00:19:15.089 "seek_data": false, 00:19:15.089 "copy": true, 00:19:15.089 "nvme_iov_md": false 00:19:15.089 }, 00:19:15.089 "memory_domains": [ 00:19:15.089 { 00:19:15.089 "dma_device_id": "system", 00:19:15.089 "dma_device_type": 1 00:19:15.089 }, 00:19:15.089 { 00:19:15.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.089 "dma_device_type": 
2 00:19:15.089 } 00:19:15.089 ], 00:19:15.089 "driver_specific": {} 00:19:15.089 } 00:19:15.089 ] 00:19:15.089 18:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:15.089 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:15.089 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:15.089 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:15.348 [2024-07-15 18:34:00.728937] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.348 [2024-07-15 18:34:00.728984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.348 [2024-07-15 18:34:00.729003] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.348 [2024-07-15 18:34:00.730397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.348 [2024-07-15 18:34:00.730440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.348 18:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.606 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.606 "name": "Existed_Raid", 00:19:15.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.606 "strip_size_kb": 64, 00:19:15.606 "state": "configuring", 00:19:15.606 "raid_level": "concat", 00:19:15.606 "superblock": false, 00:19:15.606 "num_base_bdevs": 4, 00:19:15.606 "num_base_bdevs_discovered": 3, 00:19:15.606 "num_base_bdevs_operational": 4, 00:19:15.606 "base_bdevs_list": [ 00:19:15.606 { 00:19:15.606 "name": "BaseBdev1", 00:19:15.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.606 "is_configured": false, 00:19:15.606 "data_offset": 0, 00:19:15.606 "data_size": 0 00:19:15.606 }, 00:19:15.606 { 00:19:15.606 "name": "BaseBdev2", 00:19:15.606 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:15.606 "is_configured": true, 00:19:15.606 "data_offset": 0, 00:19:15.606 "data_size": 65536 00:19:15.606 }, 00:19:15.606 { 00:19:15.606 "name": "BaseBdev3", 00:19:15.606 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:15.606 "is_configured": true, 00:19:15.607 "data_offset": 0, 00:19:15.607 "data_size": 65536 
00:19:15.607 }, 00:19:15.607 { 00:19:15.607 "name": "BaseBdev4", 00:19:15.607 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:15.607 "is_configured": true, 00:19:15.607 "data_offset": 0, 00:19:15.607 "data_size": 65536 00:19:15.607 } 00:19:15.607 ] 00:19:15.607 }' 00:19:15.607 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.607 18:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:16.542 [2024-07-15 18:34:01.980287] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.542 18:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.542 18:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.542 18:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.542 18:34:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.542 18:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.806 18:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.806 "name": "Existed_Raid", 00:19:16.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.806 "strip_size_kb": 64, 00:19:16.806 "state": "configuring", 00:19:16.806 "raid_level": "concat", 00:19:16.806 "superblock": false, 00:19:16.806 "num_base_bdevs": 4, 00:19:16.806 "num_base_bdevs_discovered": 2, 00:19:16.806 "num_base_bdevs_operational": 4, 00:19:16.806 "base_bdevs_list": [ 00:19:16.806 { 00:19:16.806 "name": "BaseBdev1", 00:19:16.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.806 "is_configured": false, 00:19:16.806 "data_offset": 0, 00:19:16.806 "data_size": 0 00:19:16.806 }, 00:19:16.806 { 00:19:16.806 "name": null, 00:19:16.806 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:16.806 "is_configured": false, 00:19:16.806 "data_offset": 0, 00:19:16.806 "data_size": 65536 00:19:16.806 }, 00:19:16.806 { 00:19:16.806 "name": "BaseBdev3", 00:19:16.806 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:16.806 "is_configured": true, 00:19:16.806 "data_offset": 0, 00:19:16.806 "data_size": 65536 00:19:16.806 }, 00:19:16.806 { 00:19:16.806 "name": "BaseBdev4", 00:19:16.806 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:16.806 "is_configured": true, 00:19:16.806 "data_offset": 0, 00:19:16.806 "data_size": 65536 00:19:16.806 } 00:19:16.806 ] 00:19:16.806 }' 00:19:16.806 18:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.806 18:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.373 18:34:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.373 18:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:17.632 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:17.632 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.890 [2024-07-15 18:34:03.383257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.890 BaseBdev1 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:17.890 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.148 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:18.406 [ 00:19:18.406 { 00:19:18.406 "name": "BaseBdev1", 00:19:18.406 "aliases": [ 00:19:18.406 "9e2a1ce4-89c7-49e5-847e-d83384f44aad" 00:19:18.406 
], 00:19:18.406 "product_name": "Malloc disk", 00:19:18.406 "block_size": 512, 00:19:18.406 "num_blocks": 65536, 00:19:18.406 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:18.406 "assigned_rate_limits": { 00:19:18.406 "rw_ios_per_sec": 0, 00:19:18.406 "rw_mbytes_per_sec": 0, 00:19:18.406 "r_mbytes_per_sec": 0, 00:19:18.406 "w_mbytes_per_sec": 0 00:19:18.406 }, 00:19:18.406 "claimed": true, 00:19:18.406 "claim_type": "exclusive_write", 00:19:18.406 "zoned": false, 00:19:18.406 "supported_io_types": { 00:19:18.406 "read": true, 00:19:18.406 "write": true, 00:19:18.406 "unmap": true, 00:19:18.406 "flush": true, 00:19:18.406 "reset": true, 00:19:18.406 "nvme_admin": false, 00:19:18.406 "nvme_io": false, 00:19:18.406 "nvme_io_md": false, 00:19:18.406 "write_zeroes": true, 00:19:18.406 "zcopy": true, 00:19:18.406 "get_zone_info": false, 00:19:18.406 "zone_management": false, 00:19:18.406 "zone_append": false, 00:19:18.406 "compare": false, 00:19:18.406 "compare_and_write": false, 00:19:18.406 "abort": true, 00:19:18.406 "seek_hole": false, 00:19:18.406 "seek_data": false, 00:19:18.406 "copy": true, 00:19:18.406 "nvme_iov_md": false 00:19:18.406 }, 00:19:18.406 "memory_domains": [ 00:19:18.406 { 00:19:18.406 "dma_device_id": "system", 00:19:18.406 "dma_device_type": 1 00:19:18.406 }, 00:19:18.406 { 00:19:18.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.406 "dma_device_type": 2 00:19:18.406 } 00:19:18.406 ], 00:19:18.406 "driver_specific": {} 00:19:18.406 } 00:19:18.406 ] 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.406 18:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.684 18:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.684 "name": "Existed_Raid", 00:19:18.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.684 "strip_size_kb": 64, 00:19:18.684 "state": "configuring", 00:19:18.684 "raid_level": "concat", 00:19:18.684 "superblock": false, 00:19:18.684 "num_base_bdevs": 4, 00:19:18.684 "num_base_bdevs_discovered": 3, 00:19:18.684 "num_base_bdevs_operational": 4, 00:19:18.684 "base_bdevs_list": [ 00:19:18.684 { 00:19:18.684 "name": "BaseBdev1", 00:19:18.684 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:18.684 "is_configured": true, 00:19:18.684 "data_offset": 0, 00:19:18.684 "data_size": 65536 00:19:18.684 }, 00:19:18.684 { 00:19:18.684 "name": null, 00:19:18.684 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:18.684 "is_configured": false, 00:19:18.684 
"data_offset": 0, 00:19:18.684 "data_size": 65536 00:19:18.684 }, 00:19:18.684 { 00:19:18.684 "name": "BaseBdev3", 00:19:18.684 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:18.684 "is_configured": true, 00:19:18.684 "data_offset": 0, 00:19:18.684 "data_size": 65536 00:19:18.684 }, 00:19:18.684 { 00:19:18.684 "name": "BaseBdev4", 00:19:18.684 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:18.684 "is_configured": true, 00:19:18.684 "data_offset": 0, 00:19:18.684 "data_size": 65536 00:19:18.684 } 00:19:18.684 ] 00:19:18.684 }' 00:19:18.684 18:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.684 18:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.251 18:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.251 18:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:19.509 18:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:19.509 18:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:19.767 [2024-07-15 18:34:05.176114] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:19.767 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.767 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.767 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.767 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:19:19.767 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.767 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:19.768 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.768 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.768 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.768 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.768 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.768 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.027 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.027 "name": "Existed_Raid", 00:19:20.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.027 "strip_size_kb": 64, 00:19:20.027 "state": "configuring", 00:19:20.027 "raid_level": "concat", 00:19:20.027 "superblock": false, 00:19:20.027 "num_base_bdevs": 4, 00:19:20.027 "num_base_bdevs_discovered": 2, 00:19:20.027 "num_base_bdevs_operational": 4, 00:19:20.027 "base_bdevs_list": [ 00:19:20.027 { 00:19:20.027 "name": "BaseBdev1", 00:19:20.027 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:20.027 "is_configured": true, 00:19:20.027 "data_offset": 0, 00:19:20.027 "data_size": 65536 00:19:20.027 }, 00:19:20.027 { 00:19:20.027 "name": null, 00:19:20.027 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:20.027 "is_configured": false, 00:19:20.027 "data_offset": 0, 00:19:20.027 "data_size": 65536 00:19:20.027 }, 00:19:20.027 { 00:19:20.027 "name": 
null, 00:19:20.027 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:20.027 "is_configured": false, 00:19:20.027 "data_offset": 0, 00:19:20.027 "data_size": 65536 00:19:20.027 }, 00:19:20.027 { 00:19:20.027 "name": "BaseBdev4", 00:19:20.027 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:20.027 "is_configured": true, 00:19:20.027 "data_offset": 0, 00:19:20.027 "data_size": 65536 00:19:20.027 } 00:19:20.027 ] 00:19:20.027 }' 00:19:20.027 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.027 18:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.594 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.594 18:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:20.853 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:20.853 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:21.113 [2024-07-15 18:34:06.451573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.113 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.680 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.680 "name": "Existed_Raid", 00:19:21.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.680 "strip_size_kb": 64, 00:19:21.680 "state": "configuring", 00:19:21.680 "raid_level": "concat", 00:19:21.680 "superblock": false, 00:19:21.680 "num_base_bdevs": 4, 00:19:21.680 "num_base_bdevs_discovered": 3, 00:19:21.680 "num_base_bdevs_operational": 4, 00:19:21.680 "base_bdevs_list": [ 00:19:21.681 { 00:19:21.681 "name": "BaseBdev1", 00:19:21.681 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:21.681 "is_configured": true, 00:19:21.681 "data_offset": 0, 00:19:21.681 "data_size": 65536 00:19:21.681 }, 00:19:21.681 { 00:19:21.681 "name": null, 00:19:21.681 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:21.681 "is_configured": false, 00:19:21.681 "data_offset": 0, 00:19:21.681 "data_size": 65536 00:19:21.681 }, 00:19:21.681 { 00:19:21.681 "name": "BaseBdev3", 00:19:21.681 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 
00:19:21.681 "is_configured": true, 00:19:21.681 "data_offset": 0, 00:19:21.681 "data_size": 65536 00:19:21.681 }, 00:19:21.681 { 00:19:21.681 "name": "BaseBdev4", 00:19:21.681 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:21.681 "is_configured": true, 00:19:21.681 "data_offset": 0, 00:19:21.681 "data_size": 65536 00:19:21.681 } 00:19:21.681 ] 00:19:21.681 }' 00:19:21.681 18:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.681 18:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.248 18:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.248 18:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:22.516 18:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:22.516 18:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:23.089 [2024-07-15 18:34:08.436937] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.089 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.664 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.664 "name": "Existed_Raid", 00:19:23.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.664 "strip_size_kb": 64, 00:19:23.664 "state": "configuring", 00:19:23.664 "raid_level": "concat", 00:19:23.664 "superblock": false, 00:19:23.664 "num_base_bdevs": 4, 00:19:23.664 "num_base_bdevs_discovered": 2, 00:19:23.664 "num_base_bdevs_operational": 4, 00:19:23.664 "base_bdevs_list": [ 00:19:23.664 { 00:19:23.664 "name": null, 00:19:23.664 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:23.664 "is_configured": false, 00:19:23.664 "data_offset": 0, 00:19:23.664 "data_size": 65536 00:19:23.664 }, 00:19:23.664 { 00:19:23.664 "name": null, 00:19:23.664 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:23.664 "is_configured": false, 00:19:23.664 "data_offset": 0, 00:19:23.665 "data_size": 65536 00:19:23.665 }, 00:19:23.665 { 00:19:23.665 "name": "BaseBdev3", 00:19:23.665 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:23.665 "is_configured": true, 00:19:23.665 "data_offset": 0, 00:19:23.665 "data_size": 65536 00:19:23.665 }, 
00:19:23.665 { 00:19:23.665 "name": "BaseBdev4", 00:19:23.665 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:23.665 "is_configured": true, 00:19:23.665 "data_offset": 0, 00:19:23.665 "data_size": 65536 00:19:23.665 } 00:19:23.665 ] 00:19:23.665 }' 00:19:23.665 18:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.665 18:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.234 18:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.234 18:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:24.493 18:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:24.493 18:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:24.751 [2024-07-15 18:34:10.108080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:24.752 
18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.752 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.011 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.011 "name": "Existed_Raid", 00:19:25.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.011 "strip_size_kb": 64, 00:19:25.011 "state": "configuring", 00:19:25.011 "raid_level": "concat", 00:19:25.011 "superblock": false, 00:19:25.011 "num_base_bdevs": 4, 00:19:25.011 "num_base_bdevs_discovered": 3, 00:19:25.011 "num_base_bdevs_operational": 4, 00:19:25.011 "base_bdevs_list": [ 00:19:25.011 { 00:19:25.011 "name": null, 00:19:25.011 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:25.011 "is_configured": false, 00:19:25.011 "data_offset": 0, 00:19:25.011 "data_size": 65536 00:19:25.011 }, 00:19:25.011 { 00:19:25.011 "name": "BaseBdev2", 00:19:25.011 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:25.011 "is_configured": true, 00:19:25.011 "data_offset": 0, 00:19:25.011 "data_size": 65536 00:19:25.011 }, 00:19:25.011 { 00:19:25.011 "name": "BaseBdev3", 00:19:25.011 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:25.011 "is_configured": true, 00:19:25.011 "data_offset": 0, 00:19:25.011 "data_size": 65536 00:19:25.011 }, 00:19:25.011 { 00:19:25.011 "name": "BaseBdev4", 00:19:25.011 "uuid": 
"666d8993-5390-491d-9982-b866387a3ef8", 00:19:25.011 "is_configured": true, 00:19:25.011 "data_offset": 0, 00:19:25.011 "data_size": 65536 00:19:25.011 } 00:19:25.011 ] 00:19:25.011 }' 00:19:25.011 18:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.011 18:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.585 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.585 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:25.844 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:25.844 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.844 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:26.103 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9e2a1ce4-89c7-49e5-847e-d83384f44aad 00:19:26.361 [2024-07-15 18:34:11.775850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:26.361 [2024-07-15 18:34:11.775888] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2158880 00:19:26.361 [2024-07-15 18:34:11.775895] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:26.361 [2024-07-15 18:34:11.776101] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x215fb70 00:19:26.361 [2024-07-15 18:34:11.776224] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x2158880 00:19:26.361 [2024-07-15 18:34:11.776232] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2158880 00:19:26.361 [2024-07-15 18:34:11.776396] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.361 NewBaseBdev 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:26.361 18:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:26.621 18:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:26.880 [ 00:19:26.880 { 00:19:26.880 "name": "NewBaseBdev", 00:19:26.880 "aliases": [ 00:19:26.880 "9e2a1ce4-89c7-49e5-847e-d83384f44aad" 00:19:26.880 ], 00:19:26.880 "product_name": "Malloc disk", 00:19:26.880 "block_size": 512, 00:19:26.880 "num_blocks": 65536, 00:19:26.880 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:26.880 "assigned_rate_limits": { 00:19:26.880 "rw_ios_per_sec": 0, 00:19:26.880 "rw_mbytes_per_sec": 0, 00:19:26.880 "r_mbytes_per_sec": 0, 00:19:26.880 "w_mbytes_per_sec": 0 00:19:26.880 }, 00:19:26.880 "claimed": true, 00:19:26.880 
"claim_type": "exclusive_write", 00:19:26.880 "zoned": false, 00:19:26.880 "supported_io_types": { 00:19:26.880 "read": true, 00:19:26.880 "write": true, 00:19:26.880 "unmap": true, 00:19:26.880 "flush": true, 00:19:26.880 "reset": true, 00:19:26.880 "nvme_admin": false, 00:19:26.880 "nvme_io": false, 00:19:26.880 "nvme_io_md": false, 00:19:26.880 "write_zeroes": true, 00:19:26.880 "zcopy": true, 00:19:26.880 "get_zone_info": false, 00:19:26.880 "zone_management": false, 00:19:26.880 "zone_append": false, 00:19:26.880 "compare": false, 00:19:26.880 "compare_and_write": false, 00:19:26.880 "abort": true, 00:19:26.880 "seek_hole": false, 00:19:26.880 "seek_data": false, 00:19:26.880 "copy": true, 00:19:26.880 "nvme_iov_md": false 00:19:26.880 }, 00:19:26.880 "memory_domains": [ 00:19:26.880 { 00:19:26.880 "dma_device_id": "system", 00:19:26.880 "dma_device_type": 1 00:19:26.880 }, 00:19:26.880 { 00:19:26.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.880 "dma_device_type": 2 00:19:26.880 } 00:19:26.880 ], 00:19:26.880 "driver_specific": {} 00:19:26.880 } 00:19:26.880 ] 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.880 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.139 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.139 "name": "Existed_Raid", 00:19:27.139 "uuid": "f6b9550d-b633-4d8d-80f7-1494d342b8ab", 00:19:27.139 "strip_size_kb": 64, 00:19:27.139 "state": "online", 00:19:27.139 "raid_level": "concat", 00:19:27.139 "superblock": false, 00:19:27.139 "num_base_bdevs": 4, 00:19:27.139 "num_base_bdevs_discovered": 4, 00:19:27.139 "num_base_bdevs_operational": 4, 00:19:27.139 "base_bdevs_list": [ 00:19:27.139 { 00:19:27.139 "name": "NewBaseBdev", 00:19:27.139 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:27.139 "is_configured": true, 00:19:27.139 "data_offset": 0, 00:19:27.139 "data_size": 65536 00:19:27.139 }, 00:19:27.139 { 00:19:27.139 "name": "BaseBdev2", 00:19:27.139 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:27.139 "is_configured": true, 00:19:27.139 "data_offset": 0, 00:19:27.139 "data_size": 65536 00:19:27.139 }, 00:19:27.139 { 00:19:27.139 "name": "BaseBdev3", 00:19:27.139 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:27.139 "is_configured": true, 00:19:27.139 "data_offset": 0, 00:19:27.139 "data_size": 65536 00:19:27.139 }, 00:19:27.139 { 00:19:27.139 "name": "BaseBdev4", 00:19:27.139 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:27.139 "is_configured": 
true, 00:19:27.139 "data_offset": 0, 00:19:27.139 "data_size": 65536 00:19:27.139 } 00:19:27.139 ] 00:19:27.139 }' 00:19:27.139 18:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.139 18:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:27.706 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:27.965 [2024-07-15 18:34:13.420658] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.965 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:27.965 "name": "Existed_Raid", 00:19:27.965 "aliases": [ 00:19:27.965 "f6b9550d-b633-4d8d-80f7-1494d342b8ab" 00:19:27.965 ], 00:19:27.965 "product_name": "Raid Volume", 00:19:27.965 "block_size": 512, 00:19:27.965 "num_blocks": 262144, 00:19:27.965 "uuid": "f6b9550d-b633-4d8d-80f7-1494d342b8ab", 00:19:27.965 "assigned_rate_limits": { 00:19:27.965 "rw_ios_per_sec": 0, 00:19:27.965 "rw_mbytes_per_sec": 0, 00:19:27.965 "r_mbytes_per_sec": 0, 00:19:27.965 
"w_mbytes_per_sec": 0 00:19:27.965 }, 00:19:27.965 "claimed": false, 00:19:27.965 "zoned": false, 00:19:27.965 "supported_io_types": { 00:19:27.965 "read": true, 00:19:27.965 "write": true, 00:19:27.965 "unmap": true, 00:19:27.965 "flush": true, 00:19:27.965 "reset": true, 00:19:27.965 "nvme_admin": false, 00:19:27.965 "nvme_io": false, 00:19:27.965 "nvme_io_md": false, 00:19:27.965 "write_zeroes": true, 00:19:27.965 "zcopy": false, 00:19:27.965 "get_zone_info": false, 00:19:27.965 "zone_management": false, 00:19:27.965 "zone_append": false, 00:19:27.965 "compare": false, 00:19:27.965 "compare_and_write": false, 00:19:27.965 "abort": false, 00:19:27.965 "seek_hole": false, 00:19:27.965 "seek_data": false, 00:19:27.965 "copy": false, 00:19:27.965 "nvme_iov_md": false 00:19:27.965 }, 00:19:27.965 "memory_domains": [ 00:19:27.965 { 00:19:27.965 "dma_device_id": "system", 00:19:27.965 "dma_device_type": 1 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.965 "dma_device_type": 2 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "system", 00:19:27.965 "dma_device_type": 1 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.965 "dma_device_type": 2 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "system", 00:19:27.965 "dma_device_type": 1 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.965 "dma_device_type": 2 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "system", 00:19:27.965 "dma_device_type": 1 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.965 "dma_device_type": 2 00:19:27.965 } 00:19:27.965 ], 00:19:27.965 "driver_specific": { 00:19:27.965 "raid": { 00:19:27.965 "uuid": "f6b9550d-b633-4d8d-80f7-1494d342b8ab", 00:19:27.965 "strip_size_kb": 64, 00:19:27.965 "state": "online", 00:19:27.965 "raid_level": "concat", 00:19:27.965 "superblock": false, 
00:19:27.965 "num_base_bdevs": 4, 00:19:27.965 "num_base_bdevs_discovered": 4, 00:19:27.965 "num_base_bdevs_operational": 4, 00:19:27.965 "base_bdevs_list": [ 00:19:27.965 { 00:19:27.965 "name": "NewBaseBdev", 00:19:27.965 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:27.965 "is_configured": true, 00:19:27.965 "data_offset": 0, 00:19:27.965 "data_size": 65536 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "name": "BaseBdev2", 00:19:27.965 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:27.965 "is_configured": true, 00:19:27.965 "data_offset": 0, 00:19:27.965 "data_size": 65536 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "name": "BaseBdev3", 00:19:27.965 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:27.965 "is_configured": true, 00:19:27.965 "data_offset": 0, 00:19:27.965 "data_size": 65536 00:19:27.965 }, 00:19:27.965 { 00:19:27.965 "name": "BaseBdev4", 00:19:27.965 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:27.965 "is_configured": true, 00:19:27.965 "data_offset": 0, 00:19:27.965 "data_size": 65536 00:19:27.965 } 00:19:27.965 ] 00:19:27.965 } 00:19:27.965 } 00:19:27.965 }' 00:19:27.965 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.965 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:27.965 BaseBdev2 00:19:27.965 BaseBdev3 00:19:27.965 BaseBdev4' 00:19:27.965 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:27.965 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:27.965 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:28.223 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:19:28.223 "name": "NewBaseBdev", 00:19:28.223 "aliases": [ 00:19:28.223 "9e2a1ce4-89c7-49e5-847e-d83384f44aad" 00:19:28.223 ], 00:19:28.223 "product_name": "Malloc disk", 00:19:28.223 "block_size": 512, 00:19:28.223 "num_blocks": 65536, 00:19:28.223 "uuid": "9e2a1ce4-89c7-49e5-847e-d83384f44aad", 00:19:28.223 "assigned_rate_limits": { 00:19:28.223 "rw_ios_per_sec": 0, 00:19:28.223 "rw_mbytes_per_sec": 0, 00:19:28.223 "r_mbytes_per_sec": 0, 00:19:28.223 "w_mbytes_per_sec": 0 00:19:28.223 }, 00:19:28.223 "claimed": true, 00:19:28.223 "claim_type": "exclusive_write", 00:19:28.223 "zoned": false, 00:19:28.223 "supported_io_types": { 00:19:28.223 "read": true, 00:19:28.223 "write": true, 00:19:28.223 "unmap": true, 00:19:28.223 "flush": true, 00:19:28.223 "reset": true, 00:19:28.223 "nvme_admin": false, 00:19:28.223 "nvme_io": false, 00:19:28.223 "nvme_io_md": false, 00:19:28.223 "write_zeroes": true, 00:19:28.223 "zcopy": true, 00:19:28.223 "get_zone_info": false, 00:19:28.223 "zone_management": false, 00:19:28.223 "zone_append": false, 00:19:28.223 "compare": false, 00:19:28.223 "compare_and_write": false, 00:19:28.223 "abort": true, 00:19:28.223 "seek_hole": false, 00:19:28.223 "seek_data": false, 00:19:28.223 "copy": true, 00:19:28.223 "nvme_iov_md": false 00:19:28.223 }, 00:19:28.223 "memory_domains": [ 00:19:28.223 { 00:19:28.223 "dma_device_id": "system", 00:19:28.223 "dma_device_type": 1 00:19:28.223 }, 00:19:28.223 { 00:19:28.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.223 "dma_device_type": 2 00:19:28.223 } 00:19:28.223 ], 00:19:28.223 "driver_specific": {} 00:19:28.223 }' 00:19:28.223 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.482 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.482 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:28.482 18:34:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.482 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.482 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:28.482 18:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.482 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:28.772 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:29.030 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:29.030 "name": "BaseBdev2", 00:19:29.030 "aliases": [ 00:19:29.030 "443885d7-f7e6-49cb-8632-43fe760f2e7a" 00:19:29.030 ], 00:19:29.030 "product_name": "Malloc disk", 00:19:29.030 "block_size": 512, 00:19:29.030 "num_blocks": 65536, 00:19:29.030 "uuid": "443885d7-f7e6-49cb-8632-43fe760f2e7a", 00:19:29.030 "assigned_rate_limits": { 00:19:29.030 "rw_ios_per_sec": 0, 00:19:29.030 "rw_mbytes_per_sec": 0, 00:19:29.030 "r_mbytes_per_sec": 0, 00:19:29.030 "w_mbytes_per_sec": 0 00:19:29.030 }, 00:19:29.030 "claimed": true, 00:19:29.030 "claim_type": "exclusive_write", 
00:19:29.030 "zoned": false, 00:19:29.030 "supported_io_types": { 00:19:29.030 "read": true, 00:19:29.030 "write": true, 00:19:29.030 "unmap": true, 00:19:29.031 "flush": true, 00:19:29.031 "reset": true, 00:19:29.031 "nvme_admin": false, 00:19:29.031 "nvme_io": false, 00:19:29.031 "nvme_io_md": false, 00:19:29.031 "write_zeroes": true, 00:19:29.031 "zcopy": true, 00:19:29.031 "get_zone_info": false, 00:19:29.031 "zone_management": false, 00:19:29.031 "zone_append": false, 00:19:29.031 "compare": false, 00:19:29.031 "compare_and_write": false, 00:19:29.031 "abort": true, 00:19:29.031 "seek_hole": false, 00:19:29.031 "seek_data": false, 00:19:29.031 "copy": true, 00:19:29.031 "nvme_iov_md": false 00:19:29.031 }, 00:19:29.031 "memory_domains": [ 00:19:29.031 { 00:19:29.031 "dma_device_id": "system", 00:19:29.031 "dma_device_type": 1 00:19:29.031 }, 00:19:29.031 { 00:19:29.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.031 "dma_device_type": 2 00:19:29.031 } 00:19:29.031 ], 00:19:29.031 "driver_specific": {} 00:19:29.031 }' 00:19:29.031 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:29.031 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:29.031 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:29.031 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:29.031 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:29.289 18:34:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:29.289 18:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:29.547 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:29.547 "name": "BaseBdev3", 00:19:29.547 "aliases": [ 00:19:29.547 "3045d446-c78f-4882-9039-a4fa948c4b2f" 00:19:29.547 ], 00:19:29.547 "product_name": "Malloc disk", 00:19:29.547 "block_size": 512, 00:19:29.547 "num_blocks": 65536, 00:19:29.547 "uuid": "3045d446-c78f-4882-9039-a4fa948c4b2f", 00:19:29.547 "assigned_rate_limits": { 00:19:29.547 "rw_ios_per_sec": 0, 00:19:29.547 "rw_mbytes_per_sec": 0, 00:19:29.547 "r_mbytes_per_sec": 0, 00:19:29.547 "w_mbytes_per_sec": 0 00:19:29.547 }, 00:19:29.547 "claimed": true, 00:19:29.547 "claim_type": "exclusive_write", 00:19:29.547 "zoned": false, 00:19:29.547 "supported_io_types": { 00:19:29.547 "read": true, 00:19:29.547 "write": true, 00:19:29.547 "unmap": true, 00:19:29.547 "flush": true, 00:19:29.548 "reset": true, 00:19:29.548 "nvme_admin": false, 00:19:29.548 "nvme_io": false, 00:19:29.548 "nvme_io_md": false, 00:19:29.548 "write_zeroes": true, 00:19:29.548 "zcopy": true, 00:19:29.548 "get_zone_info": false, 00:19:29.548 "zone_management": false, 00:19:29.548 "zone_append": false, 00:19:29.548 "compare": false, 00:19:29.548 "compare_and_write": false, 00:19:29.548 "abort": true, 00:19:29.548 "seek_hole": false, 
00:19:29.548 "seek_data": false, 00:19:29.548 "copy": true, 00:19:29.548 "nvme_iov_md": false 00:19:29.548 }, 00:19:29.548 "memory_domains": [ 00:19:29.548 { 00:19:29.548 "dma_device_id": "system", 00:19:29.548 "dma_device_type": 1 00:19:29.548 }, 00:19:29.548 { 00:19:29.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.548 "dma_device_type": 2 00:19:29.548 } 00:19:29.548 ], 00:19:29.548 "driver_specific": {} 00:19:29.548 }' 00:19:29.548 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:29.806 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:29.806 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:29.807 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.065 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.065 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.065 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.065 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 
00:19:30.065 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.324 "name": "BaseBdev4", 00:19:30.324 "aliases": [ 00:19:30.324 "666d8993-5390-491d-9982-b866387a3ef8" 00:19:30.324 ], 00:19:30.324 "product_name": "Malloc disk", 00:19:30.324 "block_size": 512, 00:19:30.324 "num_blocks": 65536, 00:19:30.324 "uuid": "666d8993-5390-491d-9982-b866387a3ef8", 00:19:30.324 "assigned_rate_limits": { 00:19:30.324 "rw_ios_per_sec": 0, 00:19:30.324 "rw_mbytes_per_sec": 0, 00:19:30.324 "r_mbytes_per_sec": 0, 00:19:30.324 "w_mbytes_per_sec": 0 00:19:30.324 }, 00:19:30.324 "claimed": true, 00:19:30.324 "claim_type": "exclusive_write", 00:19:30.324 "zoned": false, 00:19:30.324 "supported_io_types": { 00:19:30.324 "read": true, 00:19:30.324 "write": true, 00:19:30.324 "unmap": true, 00:19:30.324 "flush": true, 00:19:30.324 "reset": true, 00:19:30.324 "nvme_admin": false, 00:19:30.324 "nvme_io": false, 00:19:30.324 "nvme_io_md": false, 00:19:30.324 "write_zeroes": true, 00:19:30.324 "zcopy": true, 00:19:30.324 "get_zone_info": false, 00:19:30.324 "zone_management": false, 00:19:30.324 "zone_append": false, 00:19:30.324 "compare": false, 00:19:30.324 "compare_and_write": false, 00:19:30.324 "abort": true, 00:19:30.324 "seek_hole": false, 00:19:30.324 "seek_data": false, 00:19:30.324 "copy": true, 00:19:30.324 "nvme_iov_md": false 00:19:30.324 }, 00:19:30.324 "memory_domains": [ 00:19:30.324 { 00:19:30.324 "dma_device_id": "system", 00:19:30.324 "dma_device_type": 1 00:19:30.324 }, 00:19:30.324 { 00:19:30.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.324 "dma_device_type": 2 00:19:30.324 } 00:19:30.324 ], 00:19:30.324 "driver_specific": {} 00:19:30.324 }' 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.324 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.582 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.582 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.582 18:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.582 18:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.582 18:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.582 18:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.841 [2024-07-15 18:34:16.296074] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.841 [2024-07-15 18:34:16.296101] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.841 [2024-07-15 18:34:16.296151] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.841 [2024-07-15 18:34:16.296211] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.841 [2024-07-15 18:34:16.296220] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2158880 name Existed_Raid, state offline 00:19:30.841 18:34:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2850010 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2850010 ']' 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2850010 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2850010 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2850010' 00:19:30.841 killing process with pid 2850010 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2850010 00:19:30.841 [2024-07-15 18:34:16.361560] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.841 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2850010 00:19:31.101 [2024-07-15 18:34:16.397037] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:31.101 00:19:31.101 real 0m36.194s 00:19:31.101 user 1m8.673s 00:19:31.101 sys 0m4.834s 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.101 ************************************ 00:19:31.101 END TEST raid_state_function_test 
00:19:31.101 ************************************ 00:19:31.101 18:34:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:31.101 18:34:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:31.101 18:34:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:31.101 18:34:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:31.101 18:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.101 ************************************ 00:19:31.101 START TEST raid_state_function_test_sb 00:19:31.101 ************************************ 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2856249 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2856249' 00:19:31.101 Process raid pid: 2856249 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2856249 /var/tmp/spdk-raid.sock 00:19:31.101 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2856249 ']' 00:19:31.360 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:31.360 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.360 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:31.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:31.360 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.360 18:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.360 [2024-07-15 18:34:16.706731] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:31.360 [2024-07-15 18:34:16.706792] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.360 [2024-07-15 18:34:16.808066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.360 [2024-07-15 18:34:16.904009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.619 [2024-07-15 18:34:16.967480] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.619 [2024-07-15 18:34:16.967510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.187 18:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.187 18:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:32.187 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:32.446 [2024-07-15 18:34:17.895746] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.446 [2024-07-15 18:34:17.895785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.447 [2024-07-15 18:34:17.895794] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.447 [2024-07-15 18:34:17.895803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.447 [2024-07-15 18:34:17.895809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.447 [2024-07-15 18:34:17.895817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist 
now 00:19:32.447 [2024-07-15 18:34:17.895824] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:32.447 [2024-07-15 18:34:17.895832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.447 18:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.707 18:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.707 "name": "Existed_Raid", 00:19:32.707 "uuid": 
"8b815528-79d9-48bb-83d1-3892844b1861", 00:19:32.707 "strip_size_kb": 64, 00:19:32.707 "state": "configuring", 00:19:32.707 "raid_level": "concat", 00:19:32.707 "superblock": true, 00:19:32.707 "num_base_bdevs": 4, 00:19:32.707 "num_base_bdevs_discovered": 0, 00:19:32.707 "num_base_bdevs_operational": 4, 00:19:32.707 "base_bdevs_list": [ 00:19:32.707 { 00:19:32.707 "name": "BaseBdev1", 00:19:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.707 "is_configured": false, 00:19:32.707 "data_offset": 0, 00:19:32.707 "data_size": 0 00:19:32.707 }, 00:19:32.707 { 00:19:32.707 "name": "BaseBdev2", 00:19:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.707 "is_configured": false, 00:19:32.707 "data_offset": 0, 00:19:32.707 "data_size": 0 00:19:32.707 }, 00:19:32.707 { 00:19:32.707 "name": "BaseBdev3", 00:19:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.707 "is_configured": false, 00:19:32.707 "data_offset": 0, 00:19:32.707 "data_size": 0 00:19:32.707 }, 00:19:32.707 { 00:19:32.707 "name": "BaseBdev4", 00:19:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.707 "is_configured": false, 00:19:32.707 "data_offset": 0, 00:19:32.707 "data_size": 0 00:19:32.707 } 00:19:32.707 ] 00:19:32.707 }' 00:19:32.707 18:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.707 18:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.276 18:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:33.535 [2024-07-15 18:34:19.038656] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:33.535 [2024-07-15 18:34:19.038685] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x142fbc0 name Existed_Raid, state configuring 00:19:33.535 18:34:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:33.794 [2024-07-15 18:34:19.295374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:33.794 [2024-07-15 18:34:19.295401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:33.794 [2024-07-15 18:34:19.295409] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:33.794 [2024-07-15 18:34:19.295417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:33.794 [2024-07-15 18:34:19.295424] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:33.794 [2024-07-15 18:34:19.295432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:33.794 [2024-07-15 18:34:19.295438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:33.794 [2024-07-15 18:34:19.295446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:33.794 18:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:34.053 [2024-07-15 18:34:19.557546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.053 BaseBdev1 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 
-- # local bdev_timeout= 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:34.053 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.312 18:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:34.570 [ 00:19:34.570 { 00:19:34.570 "name": "BaseBdev1", 00:19:34.570 "aliases": [ 00:19:34.570 "334fa9b6-7890-4152-8e55-9653e5d51e0d" 00:19:34.570 ], 00:19:34.570 "product_name": "Malloc disk", 00:19:34.570 "block_size": 512, 00:19:34.570 "num_blocks": 65536, 00:19:34.570 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:34.570 "assigned_rate_limits": { 00:19:34.570 "rw_ios_per_sec": 0, 00:19:34.570 "rw_mbytes_per_sec": 0, 00:19:34.570 "r_mbytes_per_sec": 0, 00:19:34.570 "w_mbytes_per_sec": 0 00:19:34.570 }, 00:19:34.570 "claimed": true, 00:19:34.570 "claim_type": "exclusive_write", 00:19:34.570 "zoned": false, 00:19:34.570 "supported_io_types": { 00:19:34.570 "read": true, 00:19:34.570 "write": true, 00:19:34.570 "unmap": true, 00:19:34.570 "flush": true, 00:19:34.570 "reset": true, 00:19:34.570 "nvme_admin": false, 00:19:34.570 "nvme_io": false, 00:19:34.570 "nvme_io_md": false, 00:19:34.570 "write_zeroes": true, 00:19:34.570 "zcopy": true, 00:19:34.570 "get_zone_info": false, 00:19:34.570 "zone_management": false, 00:19:34.570 "zone_append": false, 00:19:34.570 "compare": false, 00:19:34.571 "compare_and_write": false, 00:19:34.571 "abort": true, 00:19:34.571 "seek_hole": 
false, 00:19:34.571 "seek_data": false, 00:19:34.571 "copy": true, 00:19:34.571 "nvme_iov_md": false 00:19:34.571 }, 00:19:34.571 "memory_domains": [ 00:19:34.571 { 00:19:34.571 "dma_device_id": "system", 00:19:34.571 "dma_device_type": 1 00:19:34.571 }, 00:19:34.571 { 00:19:34.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.571 "dma_device_type": 2 00:19:34.571 } 00:19:34.571 ], 00:19:34.571 "driver_specific": {} 00:19:34.571 } 00:19:34.571 ] 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.571 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.571 18:34:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.830 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.830 "name": "Existed_Raid", 00:19:34.830 "uuid": "604f9506-0002-48bb-a851-c8c22ee50bc9", 00:19:34.830 "strip_size_kb": 64, 00:19:34.830 "state": "configuring", 00:19:34.830 "raid_level": "concat", 00:19:34.830 "superblock": true, 00:19:34.830 "num_base_bdevs": 4, 00:19:34.830 "num_base_bdevs_discovered": 1, 00:19:34.830 "num_base_bdevs_operational": 4, 00:19:34.830 "base_bdevs_list": [ 00:19:34.830 { 00:19:34.830 "name": "BaseBdev1", 00:19:34.830 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:34.830 "is_configured": true, 00:19:34.830 "data_offset": 2048, 00:19:34.830 "data_size": 63488 00:19:34.830 }, 00:19:34.830 { 00:19:34.830 "name": "BaseBdev2", 00:19:34.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.830 "is_configured": false, 00:19:34.830 "data_offset": 0, 00:19:34.830 "data_size": 0 00:19:34.830 }, 00:19:34.830 { 00:19:34.830 "name": "BaseBdev3", 00:19:34.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.830 "is_configured": false, 00:19:34.830 "data_offset": 0, 00:19:34.830 "data_size": 0 00:19:34.830 }, 00:19:34.830 { 00:19:34.830 "name": "BaseBdev4", 00:19:34.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.830 "is_configured": false, 00:19:34.830 "data_offset": 0, 00:19:34.830 "data_size": 0 00:19:34.830 } 00:19:34.830 ] 00:19:34.830 }' 00:19:34.830 18:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.830 18:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.767 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:35.767 [2024-07-15 
18:34:21.254109] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.767 [2024-07-15 18:34:21.254151] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x142f430 name Existed_Raid, state configuring 00:19:35.767 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:36.027 [2024-07-15 18:34:21.506832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.027 [2024-07-15 18:34:21.508350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:36.027 [2024-07-15 18:34:21.508380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:36.027 [2024-07-15 18:34:21.508389] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:36.027 [2024-07-15 18:34:21.508397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:36.027 [2024-07-15 18:34:21.508403] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:36.027 [2024-07-15 18:34:21.508411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.027 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.286 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.286 "name": "Existed_Raid", 00:19:36.286 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:36.286 "strip_size_kb": 64, 00:19:36.286 "state": "configuring", 00:19:36.286 "raid_level": "concat", 00:19:36.286 "superblock": true, 00:19:36.286 "num_base_bdevs": 4, 00:19:36.286 "num_base_bdevs_discovered": 1, 00:19:36.286 "num_base_bdevs_operational": 4, 00:19:36.286 "base_bdevs_list": [ 00:19:36.286 { 00:19:36.286 "name": "BaseBdev1", 00:19:36.286 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:36.286 "is_configured": true, 00:19:36.286 "data_offset": 2048, 00:19:36.286 "data_size": 63488 00:19:36.286 }, 00:19:36.286 { 00:19:36.286 "name": "BaseBdev2", 00:19:36.286 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:36.286 "is_configured": false, 00:19:36.286 "data_offset": 0, 00:19:36.286 "data_size": 0 00:19:36.286 }, 00:19:36.286 { 00:19:36.286 "name": "BaseBdev3", 00:19:36.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.286 "is_configured": false, 00:19:36.286 "data_offset": 0, 00:19:36.286 "data_size": 0 00:19:36.286 }, 00:19:36.286 { 00:19:36.286 "name": "BaseBdev4", 00:19:36.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.286 "is_configured": false, 00:19:36.286 "data_offset": 0, 00:19:36.286 "data_size": 0 00:19:36.286 } 00:19:36.286 ] 00:19:36.286 }' 00:19:36.286 18:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.286 18:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:37.222 [2024-07-15 18:34:22.689155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.222 BaseBdev2 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:37.222 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:37.481 18:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:37.740 [ 00:19:37.740 { 00:19:37.740 "name": "BaseBdev2", 00:19:37.740 "aliases": [ 00:19:37.740 "d1412c78-01ab-4d96-ba32-e215a868785f" 00:19:37.740 ], 00:19:37.740 "product_name": "Malloc disk", 00:19:37.740 "block_size": 512, 00:19:37.740 "num_blocks": 65536, 00:19:37.740 "uuid": "d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:37.740 "assigned_rate_limits": { 00:19:37.740 "rw_ios_per_sec": 0, 00:19:37.740 "rw_mbytes_per_sec": 0, 00:19:37.740 "r_mbytes_per_sec": 0, 00:19:37.740 "w_mbytes_per_sec": 0 00:19:37.740 }, 00:19:37.740 "claimed": true, 00:19:37.740 "claim_type": "exclusive_write", 00:19:37.740 "zoned": false, 00:19:37.740 "supported_io_types": { 00:19:37.740 "read": true, 00:19:37.740 "write": true, 00:19:37.740 "unmap": true, 00:19:37.740 "flush": true, 00:19:37.740 "reset": true, 00:19:37.740 "nvme_admin": false, 00:19:37.740 "nvme_io": false, 00:19:37.740 "nvme_io_md": false, 00:19:37.740 "write_zeroes": true, 00:19:37.740 "zcopy": true, 00:19:37.740 "get_zone_info": false, 00:19:37.740 "zone_management": false, 00:19:37.740 "zone_append": false, 00:19:37.740 "compare": false, 00:19:37.740 "compare_and_write": false, 00:19:37.740 "abort": true, 00:19:37.740 "seek_hole": false, 00:19:37.740 "seek_data": false, 00:19:37.740 "copy": true, 00:19:37.740 "nvme_iov_md": false 00:19:37.740 }, 00:19:37.740 "memory_domains": [ 00:19:37.740 { 00:19:37.740 "dma_device_id": "system", 00:19:37.740 "dma_device_type": 1 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.740 "dma_device_type": 2 00:19:37.740 } 00:19:37.740 ], 00:19:37.740 "driver_specific": {} 00:19:37.740 } 00:19:37.740 ] 
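The `waitforbdev` helper invoked above polls `bdev_get_bdevs -b <name> -t <timeout>` until the newly created malloc bdev becomes visible. A minimal sketch of that retry loop, assuming a hypothetical `rpc_call` stub in place of the real `scripts/rpc.py` transport:

```python
import time

def wait_for_bdev(rpc_call, bdev_name, timeout_s=2.0, poll_s=0.1):
    """Poll until a bdev with the given name is listed, mirroring the
    waitforbdev shell helper seen in the log (bdev_get_bdevs -b NAME -t MS).
    rpc_call is a hypothetical stand-in for the real RPC client."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdevs = rpc_call("bdev_get_bdevs")
        if any(b.get("name") == bdev_name for b in bdevs):
            return True
        time.sleep(poll_s)
    return False

# Fake RPC returning a record shaped like the BaseBdev2 dump in the log.
fake = lambda method: [{"name": "BaseBdev2", "product_name": "Malloc disk"}]
```

The real helper additionally calls `bdev_wait_for_examine` first, so that claimed bdevs are fully registered before the lookup.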
00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.740 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.999 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.999 "name": "Existed_Raid", 
00:19:37.999 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:37.999 "strip_size_kb": 64, 00:19:37.999 "state": "configuring", 00:19:37.999 "raid_level": "concat", 00:19:37.999 "superblock": true, 00:19:37.999 "num_base_bdevs": 4, 00:19:37.999 "num_base_bdevs_discovered": 2, 00:19:37.999 "num_base_bdevs_operational": 4, 00:19:37.999 "base_bdevs_list": [ 00:19:37.999 { 00:19:37.999 "name": "BaseBdev1", 00:19:37.999 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:37.999 "is_configured": true, 00:19:37.999 "data_offset": 2048, 00:19:37.999 "data_size": 63488 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "name": "BaseBdev2", 00:19:37.999 "uuid": "d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:37.999 "is_configured": true, 00:19:37.999 "data_offset": 2048, 00:19:37.999 "data_size": 63488 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "name": "BaseBdev3", 00:19:37.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.999 "is_configured": false, 00:19:37.999 "data_offset": 0, 00:19:37.999 "data_size": 0 00:19:37.999 }, 00:19:37.999 { 00:19:37.999 "name": "BaseBdev4", 00:19:38.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.000 "is_configured": false, 00:19:38.000 "data_offset": 0, 00:19:38.000 "data_size": 0 00:19:38.000 } 00:19:38.000 ] 00:19:38.000 }' 00:19:38.000 18:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.000 18:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.935 18:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:38.935 [2024-07-15 18:34:24.481301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.935 BaseBdev3 00:19:39.193 18:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:39.193 
18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:39.193 18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:39.193 18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:39.193 18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:39.193 18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:39.193 18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.452 18:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:39.452 [ 00:19:39.452 { 00:19:39.452 "name": "BaseBdev3", 00:19:39.452 "aliases": [ 00:19:39.452 "0c9bc096-3b79-4d24-a681-e2a1032f6089" 00:19:39.452 ], 00:19:39.452 "product_name": "Malloc disk", 00:19:39.452 "block_size": 512, 00:19:39.452 "num_blocks": 65536, 00:19:39.452 "uuid": "0c9bc096-3b79-4d24-a681-e2a1032f6089", 00:19:39.452 "assigned_rate_limits": { 00:19:39.452 "rw_ios_per_sec": 0, 00:19:39.452 "rw_mbytes_per_sec": 0, 00:19:39.452 "r_mbytes_per_sec": 0, 00:19:39.452 "w_mbytes_per_sec": 0 00:19:39.452 }, 00:19:39.452 "claimed": true, 00:19:39.452 "claim_type": "exclusive_write", 00:19:39.452 "zoned": false, 00:19:39.452 "supported_io_types": { 00:19:39.452 "read": true, 00:19:39.452 "write": true, 00:19:39.452 "unmap": true, 00:19:39.452 "flush": true, 00:19:39.452 "reset": true, 00:19:39.452 "nvme_admin": false, 00:19:39.452 "nvme_io": false, 00:19:39.452 "nvme_io_md": false, 00:19:39.452 "write_zeroes": true, 00:19:39.452 "zcopy": true, 00:19:39.452 "get_zone_info": 
false, 00:19:39.452 "zone_management": false, 00:19:39.452 "zone_append": false, 00:19:39.452 "compare": false, 00:19:39.452 "compare_and_write": false, 00:19:39.452 "abort": true, 00:19:39.452 "seek_hole": false, 00:19:39.452 "seek_data": false, 00:19:39.452 "copy": true, 00:19:39.452 "nvme_iov_md": false 00:19:39.452 }, 00:19:39.452 "memory_domains": [ 00:19:39.452 { 00:19:39.452 "dma_device_id": "system", 00:19:39.452 "dma_device_type": 1 00:19:39.452 }, 00:19:39.452 { 00:19:39.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.452 "dma_device_type": 2 00:19:39.452 } 00:19:39.452 ], 00:19:39.452 "driver_specific": {} 00:19:39.452 } 00:19:39.452 ] 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.711 18:34:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.711 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.970 18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.970 "name": "Existed_Raid", 00:19:39.970 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:39.970 "strip_size_kb": 64, 00:19:39.970 "state": "configuring", 00:19:39.970 "raid_level": "concat", 00:19:39.970 "superblock": true, 00:19:39.970 "num_base_bdevs": 4, 00:19:39.970 "num_base_bdevs_discovered": 3, 00:19:39.970 "num_base_bdevs_operational": 4, 00:19:39.970 "base_bdevs_list": [ 00:19:39.970 { 00:19:39.970 "name": "BaseBdev1", 00:19:39.970 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:39.970 "is_configured": true, 00:19:39.970 "data_offset": 2048, 00:19:39.970 "data_size": 63488 00:19:39.970 }, 00:19:39.970 { 00:19:39.970 "name": "BaseBdev2", 00:19:39.970 "uuid": "d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:39.970 "is_configured": true, 00:19:39.970 "data_offset": 2048, 00:19:39.970 "data_size": 63488 00:19:39.970 }, 00:19:39.970 { 00:19:39.970 "name": "BaseBdev3", 00:19:39.970 "uuid": "0c9bc096-3b79-4d24-a681-e2a1032f6089", 00:19:39.970 "is_configured": true, 00:19:39.970 "data_offset": 2048, 00:19:39.970 "data_size": 63488 00:19:39.970 }, 00:19:39.970 { 00:19:39.970 "name": "BaseBdev4", 00:19:39.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.970 "is_configured": false, 00:19:39.970 "data_offset": 0, 00:19:39.970 "data_size": 0 00:19:39.970 } 00:19:39.970 ] 00:19:39.970 }' 00:19:39.970 
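Each pass of the loop above re-runs `verify_raid_bdev_state`: it fetches `bdev_raid_get_bdevs all`, selects the target array with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the state fields. A sketch of that check in Python, using only field names that appear in the log output (the helper itself is illustrative, not SPDK code):

```python
import json

def verify_raid_bdev_state(raid_bdevs_json, name, expected_state,
                           raid_level, strip_size, num_operational):
    """Select the raid bdev by name (jq: .[] | select(.name == "...")) and
    assert the fields checked by the test; return the discovered count,
    which grows as each BaseBdevN malloc disk is created and claimed."""
    info = next(b for b in json.loads(raid_bdevs_json) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info["num_base_bdevs_discovered"]

# Condensed from the third raid_bdev_info dump in the log (3 of 4 discovered).
sample = json.dumps([{
    "name": "Existed_Raid", "state": "configuring", "raid_level": "concat",
    "strip_size_kb": 64, "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 4,
}])
```

Once all four base bdevs exist, the array transitions to `online` and the same check is repeated with `expected_state=online`, as the later `verify_raid_bdev_state Existed_Raid online concat 64 4` call shows.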
18:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.970 18:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.904 18:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:41.163 [2024-07-15 18:34:26.494079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:41.163 [2024-07-15 18:34:26.494263] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1430490 00:19:41.163 [2024-07-15 18:34:26.494277] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:41.163 [2024-07-15 18:34:26.494462] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x141c2d0 00:19:41.163 [2024-07-15 18:34:26.494591] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1430490 00:19:41.163 [2024-07-15 18:34:26.494599] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1430490 00:19:41.163 [2024-07-15 18:34:26.494697] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.163 BaseBdev4 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:19:41.163 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.421 18:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:41.679 [ 00:19:41.679 { 00:19:41.679 "name": "BaseBdev4", 00:19:41.679 "aliases": [ 00:19:41.679 "5b210592-20d2-43f5-aeda-2cf9221d50c5" 00:19:41.679 ], 00:19:41.679 "product_name": "Malloc disk", 00:19:41.679 "block_size": 512, 00:19:41.679 "num_blocks": 65536, 00:19:41.679 "uuid": "5b210592-20d2-43f5-aeda-2cf9221d50c5", 00:19:41.679 "assigned_rate_limits": { 00:19:41.679 "rw_ios_per_sec": 0, 00:19:41.679 "rw_mbytes_per_sec": 0, 00:19:41.679 "r_mbytes_per_sec": 0, 00:19:41.679 "w_mbytes_per_sec": 0 00:19:41.679 }, 00:19:41.679 "claimed": true, 00:19:41.679 "claim_type": "exclusive_write", 00:19:41.679 "zoned": false, 00:19:41.679 "supported_io_types": { 00:19:41.679 "read": true, 00:19:41.679 "write": true, 00:19:41.679 "unmap": true, 00:19:41.679 "flush": true, 00:19:41.679 "reset": true, 00:19:41.679 "nvme_admin": false, 00:19:41.679 "nvme_io": false, 00:19:41.679 "nvme_io_md": false, 00:19:41.679 "write_zeroes": true, 00:19:41.679 "zcopy": true, 00:19:41.679 "get_zone_info": false, 00:19:41.679 "zone_management": false, 00:19:41.679 "zone_append": false, 00:19:41.679 "compare": false, 00:19:41.679 "compare_and_write": false, 00:19:41.679 "abort": true, 00:19:41.679 "seek_hole": false, 00:19:41.679 "seek_data": false, 00:19:41.679 "copy": true, 00:19:41.679 "nvme_iov_md": false 00:19:41.679 }, 00:19:41.679 "memory_domains": [ 00:19:41.679 { 00:19:41.679 "dma_device_id": "system", 00:19:41.679 "dma_device_type": 1 00:19:41.679 }, 00:19:41.679 { 00:19:41.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.679 
"dma_device_type": 2 00:19:41.679 } 00:19:41.679 ], 00:19:41.679 "driver_specific": {} 00:19:41.679 } 00:19:41.679 ] 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:41.679 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.680 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.938 18:34:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.938 "name": "Existed_Raid", 00:19:41.938 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:41.938 "strip_size_kb": 64, 00:19:41.938 "state": "online", 00:19:41.938 "raid_level": "concat", 00:19:41.938 "superblock": true, 00:19:41.938 "num_base_bdevs": 4, 00:19:41.938 "num_base_bdevs_discovered": 4, 00:19:41.938 "num_base_bdevs_operational": 4, 00:19:41.938 "base_bdevs_list": [ 00:19:41.938 { 00:19:41.938 "name": "BaseBdev1", 00:19:41.938 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:41.938 "is_configured": true, 00:19:41.938 "data_offset": 2048, 00:19:41.938 "data_size": 63488 00:19:41.938 }, 00:19:41.938 { 00:19:41.938 "name": "BaseBdev2", 00:19:41.938 "uuid": "d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:41.938 "is_configured": true, 00:19:41.938 "data_offset": 2048, 00:19:41.938 "data_size": 63488 00:19:41.938 }, 00:19:41.938 { 00:19:41.938 "name": "BaseBdev3", 00:19:41.938 "uuid": "0c9bc096-3b79-4d24-a681-e2a1032f6089", 00:19:41.938 "is_configured": true, 00:19:41.938 "data_offset": 2048, 00:19:41.938 "data_size": 63488 00:19:41.938 }, 00:19:41.938 { 00:19:41.938 "name": "BaseBdev4", 00:19:41.938 "uuid": "5b210592-20d2-43f5-aeda-2cf9221d50c5", 00:19:41.938 "is_configured": true, 00:19:41.938 "data_offset": 2048, 00:19:41.938 "data_size": 63488 00:19:41.938 } 00:19:41.938 ] 00:19:41.938 }' 00:19:41.938 18:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.938 18:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:42.505 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:42.764 [2024-07-15 18:34:28.162937] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.764 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:42.764 "name": "Existed_Raid", 00:19:42.764 "aliases": [ 00:19:42.764 "7d34456a-19ec-4f73-8c4e-af8c67b30eee" 00:19:42.764 ], 00:19:42.764 "product_name": "Raid Volume", 00:19:42.764 "block_size": 512, 00:19:42.764 "num_blocks": 253952, 00:19:42.764 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:42.764 "assigned_rate_limits": { 00:19:42.764 "rw_ios_per_sec": 0, 00:19:42.764 "rw_mbytes_per_sec": 0, 00:19:42.764 "r_mbytes_per_sec": 0, 00:19:42.764 "w_mbytes_per_sec": 0 00:19:42.764 }, 00:19:42.764 "claimed": false, 00:19:42.764 "zoned": false, 00:19:42.764 "supported_io_types": { 00:19:42.764 "read": true, 00:19:42.764 "write": true, 00:19:42.764 "unmap": true, 00:19:42.764 "flush": true, 00:19:42.764 "reset": true, 00:19:42.764 "nvme_admin": false, 00:19:42.764 "nvme_io": false, 00:19:42.764 "nvme_io_md": false, 00:19:42.764 "write_zeroes": true, 00:19:42.764 "zcopy": false, 00:19:42.764 "get_zone_info": false, 00:19:42.764 "zone_management": false, 00:19:42.764 "zone_append": false, 00:19:42.764 "compare": false, 00:19:42.764 "compare_and_write": false, 00:19:42.764 "abort": false, 00:19:42.764 "seek_hole": 
false, 00:19:42.764 "seek_data": false, 00:19:42.764 "copy": false, 00:19:42.764 "nvme_iov_md": false 00:19:42.764 }, 00:19:42.764 "memory_domains": [ 00:19:42.764 { 00:19:42.764 "dma_device_id": "system", 00:19:42.764 "dma_device_type": 1 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.764 "dma_device_type": 2 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "system", 00:19:42.764 "dma_device_type": 1 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.764 "dma_device_type": 2 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "system", 00:19:42.764 "dma_device_type": 1 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.764 "dma_device_type": 2 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "system", 00:19:42.764 "dma_device_type": 1 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.764 "dma_device_type": 2 00:19:42.764 } 00:19:42.764 ], 00:19:42.764 "driver_specific": { 00:19:42.764 "raid": { 00:19:42.764 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:42.764 "strip_size_kb": 64, 00:19:42.764 "state": "online", 00:19:42.764 "raid_level": "concat", 00:19:42.764 "superblock": true, 00:19:42.764 "num_base_bdevs": 4, 00:19:42.764 "num_base_bdevs_discovered": 4, 00:19:42.764 "num_base_bdevs_operational": 4, 00:19:42.764 "base_bdevs_list": [ 00:19:42.764 { 00:19:42.764 "name": "BaseBdev1", 00:19:42.764 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:42.764 "is_configured": true, 00:19:42.764 "data_offset": 2048, 00:19:42.764 "data_size": 63488 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "name": "BaseBdev2", 00:19:42.764 "uuid": "d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:42.764 "is_configured": true, 00:19:42.764 "data_offset": 2048, 00:19:42.764 "data_size": 63488 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "name": "BaseBdev3", 00:19:42.764 
"uuid": "0c9bc096-3b79-4d24-a681-e2a1032f6089", 00:19:42.764 "is_configured": true, 00:19:42.764 "data_offset": 2048, 00:19:42.764 "data_size": 63488 00:19:42.764 }, 00:19:42.764 { 00:19:42.764 "name": "BaseBdev4", 00:19:42.764 "uuid": "5b210592-20d2-43f5-aeda-2cf9221d50c5", 00:19:42.764 "is_configured": true, 00:19:42.764 "data_offset": 2048, 00:19:42.764 "data_size": 63488 00:19:42.764 } 00:19:42.764 ] 00:19:42.764 } 00:19:42.764 } 00:19:42.764 }' 00:19:42.764 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:42.764 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:42.764 BaseBdev2 00:19:42.764 BaseBdev3 00:19:42.764 BaseBdev4' 00:19:42.764 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:42.764 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:42.764 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:43.022 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:43.022 "name": "BaseBdev1", 00:19:43.022 "aliases": [ 00:19:43.022 "334fa9b6-7890-4152-8e55-9653e5d51e0d" 00:19:43.022 ], 00:19:43.022 "product_name": "Malloc disk", 00:19:43.022 "block_size": 512, 00:19:43.022 "num_blocks": 65536, 00:19:43.022 "uuid": "334fa9b6-7890-4152-8e55-9653e5d51e0d", 00:19:43.022 "assigned_rate_limits": { 00:19:43.022 "rw_ios_per_sec": 0, 00:19:43.022 "rw_mbytes_per_sec": 0, 00:19:43.022 "r_mbytes_per_sec": 0, 00:19:43.022 "w_mbytes_per_sec": 0 00:19:43.022 }, 00:19:43.022 "claimed": true, 00:19:43.022 "claim_type": "exclusive_write", 00:19:43.022 "zoned": false, 00:19:43.022 "supported_io_types": { 
00:19:43.022 "read": true, 00:19:43.022 "write": true, 00:19:43.022 "unmap": true, 00:19:43.022 "flush": true, 00:19:43.022 "reset": true, 00:19:43.022 "nvme_admin": false, 00:19:43.022 "nvme_io": false, 00:19:43.022 "nvme_io_md": false, 00:19:43.022 "write_zeroes": true, 00:19:43.022 "zcopy": true, 00:19:43.022 "get_zone_info": false, 00:19:43.022 "zone_management": false, 00:19:43.022 "zone_append": false, 00:19:43.022 "compare": false, 00:19:43.022 "compare_and_write": false, 00:19:43.022 "abort": true, 00:19:43.022 "seek_hole": false, 00:19:43.022 "seek_data": false, 00:19:43.022 "copy": true, 00:19:43.022 "nvme_iov_md": false 00:19:43.022 }, 00:19:43.022 "memory_domains": [ 00:19:43.022 { 00:19:43.022 "dma_device_id": "system", 00:19:43.022 "dma_device_type": 1 00:19:43.022 }, 00:19:43.022 { 00:19:43.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.022 "dma_device_type": 2 00:19:43.022 } 00:19:43.022 ], 00:19:43.022 "driver_specific": {} 00:19:43.022 }' 00:19:43.022 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.022 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.280 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:43.280 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.280 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.280 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:43.280 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.280 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:43.538 18:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:43.797 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:43.797 "name": "BaseBdev2", 00:19:43.797 "aliases": [ 00:19:43.797 "d1412c78-01ab-4d96-ba32-e215a868785f" 00:19:43.797 ], 00:19:43.797 "product_name": "Malloc disk", 00:19:43.797 "block_size": 512, 00:19:43.797 "num_blocks": 65536, 00:19:43.797 "uuid": "d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:43.797 "assigned_rate_limits": { 00:19:43.797 "rw_ios_per_sec": 0, 00:19:43.797 "rw_mbytes_per_sec": 0, 00:19:43.797 "r_mbytes_per_sec": 0, 00:19:43.797 "w_mbytes_per_sec": 0 00:19:43.797 }, 00:19:43.797 "claimed": true, 00:19:43.797 "claim_type": "exclusive_write", 00:19:43.797 "zoned": false, 00:19:43.797 "supported_io_types": { 00:19:43.797 "read": true, 00:19:43.797 "write": true, 00:19:43.797 "unmap": true, 00:19:43.797 "flush": true, 00:19:43.797 "reset": true, 00:19:43.797 "nvme_admin": false, 00:19:43.797 "nvme_io": false, 00:19:43.797 "nvme_io_md": false, 00:19:43.797 "write_zeroes": true, 00:19:43.797 "zcopy": true, 00:19:43.797 "get_zone_info": false, 00:19:43.797 "zone_management": false, 00:19:43.797 "zone_append": false, 00:19:43.797 "compare": false, 00:19:43.797 "compare_and_write": false, 00:19:43.797 "abort": true, 00:19:43.797 "seek_hole": false, 00:19:43.797 "seek_data": 
false, 00:19:43.797 "copy": true, 00:19:43.797 "nvme_iov_md": false 00:19:43.797 }, 00:19:43.797 "memory_domains": [ 00:19:43.797 { 00:19:43.797 "dma_device_id": "system", 00:19:43.797 "dma_device_type": 1 00:19:43.797 }, 00:19:43.797 { 00:19:43.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.797 "dma_device_type": 2 00:19:43.797 } 00:19:43.797 ], 00:19:43.797 "driver_specific": {} 00:19:43.797 }' 00:19:43.797 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.797 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:44.056 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.314 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.314 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.314 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:44.314 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:19:44.314 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:44.572 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:44.572 "name": "BaseBdev3", 00:19:44.572 "aliases": [ 00:19:44.572 "0c9bc096-3b79-4d24-a681-e2a1032f6089" 00:19:44.572 ], 00:19:44.572 "product_name": "Malloc disk", 00:19:44.572 "block_size": 512, 00:19:44.572 "num_blocks": 65536, 00:19:44.572 "uuid": "0c9bc096-3b79-4d24-a681-e2a1032f6089", 00:19:44.572 "assigned_rate_limits": { 00:19:44.572 "rw_ios_per_sec": 0, 00:19:44.572 "rw_mbytes_per_sec": 0, 00:19:44.572 "r_mbytes_per_sec": 0, 00:19:44.572 "w_mbytes_per_sec": 0 00:19:44.572 }, 00:19:44.572 "claimed": true, 00:19:44.572 "claim_type": "exclusive_write", 00:19:44.572 "zoned": false, 00:19:44.572 "supported_io_types": { 00:19:44.572 "read": true, 00:19:44.572 "write": true, 00:19:44.572 "unmap": true, 00:19:44.572 "flush": true, 00:19:44.572 "reset": true, 00:19:44.572 "nvme_admin": false, 00:19:44.572 "nvme_io": false, 00:19:44.572 "nvme_io_md": false, 00:19:44.572 "write_zeroes": true, 00:19:44.572 "zcopy": true, 00:19:44.572 "get_zone_info": false, 00:19:44.572 "zone_management": false, 00:19:44.572 "zone_append": false, 00:19:44.572 "compare": false, 00:19:44.572 "compare_and_write": false, 00:19:44.572 "abort": true, 00:19:44.572 "seek_hole": false, 00:19:44.572 "seek_data": false, 00:19:44.572 "copy": true, 00:19:44.572 "nvme_iov_md": false 00:19:44.572 }, 00:19:44.572 "memory_domains": [ 00:19:44.572 { 00:19:44.572 "dma_device_id": "system", 00:19:44.572 "dma_device_type": 1 00:19:44.572 }, 00:19:44.572 { 00:19:44.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.572 "dma_device_type": 2 00:19:44.572 } 00:19:44.572 ], 00:19:44.572 "driver_specific": {} 00:19:44.572 }' 00:19:44.572 18:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.573 18:34:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.573 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:44.573 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.573 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:44.830 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:45.088 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:45.088 "name": "BaseBdev4", 00:19:45.088 "aliases": [ 00:19:45.088 "5b210592-20d2-43f5-aeda-2cf9221d50c5" 00:19:45.088 ], 00:19:45.088 "product_name": "Malloc disk", 00:19:45.088 "block_size": 512, 00:19:45.088 "num_blocks": 65536, 00:19:45.088 "uuid": "5b210592-20d2-43f5-aeda-2cf9221d50c5", 00:19:45.088 "assigned_rate_limits": { 00:19:45.088 
"rw_ios_per_sec": 0, 00:19:45.088 "rw_mbytes_per_sec": 0, 00:19:45.088 "r_mbytes_per_sec": 0, 00:19:45.088 "w_mbytes_per_sec": 0 00:19:45.088 }, 00:19:45.088 "claimed": true, 00:19:45.088 "claim_type": "exclusive_write", 00:19:45.088 "zoned": false, 00:19:45.088 "supported_io_types": { 00:19:45.088 "read": true, 00:19:45.088 "write": true, 00:19:45.088 "unmap": true, 00:19:45.088 "flush": true, 00:19:45.088 "reset": true, 00:19:45.088 "nvme_admin": false, 00:19:45.088 "nvme_io": false, 00:19:45.088 "nvme_io_md": false, 00:19:45.088 "write_zeroes": true, 00:19:45.088 "zcopy": true, 00:19:45.088 "get_zone_info": false, 00:19:45.088 "zone_management": false, 00:19:45.088 "zone_append": false, 00:19:45.088 "compare": false, 00:19:45.088 "compare_and_write": false, 00:19:45.088 "abort": true, 00:19:45.088 "seek_hole": false, 00:19:45.088 "seek_data": false, 00:19:45.088 "copy": true, 00:19:45.088 "nvme_iov_md": false 00:19:45.088 }, 00:19:45.088 "memory_domains": [ 00:19:45.088 { 00:19:45.088 "dma_device_id": "system", 00:19:45.088 "dma_device_type": 1 00:19:45.088 }, 00:19:45.088 { 00:19:45.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.088 "dma_device_type": 2 00:19:45.088 } 00:19:45.088 ], 00:19:45.088 "driver_specific": {} 00:19:45.088 }' 00:19:45.088 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:45.346 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:45.346 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:45.346 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:45.346 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:45.346 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:45.346 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:19:45.606 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:45.606 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:45.607 18:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:45.607 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:45.607 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:45.607 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:45.865 [2024-07-15 18:34:31.283219] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.865 [2024-07-15 18:34:31.283244] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.865 [2024-07-15 18:34:31.283289] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.865 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.123 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.123 "name": "Existed_Raid", 00:19:46.123 "uuid": "7d34456a-19ec-4f73-8c4e-af8c67b30eee", 00:19:46.123 "strip_size_kb": 64, 00:19:46.123 "state": "offline", 00:19:46.123 "raid_level": "concat", 00:19:46.123 "superblock": true, 00:19:46.123 "num_base_bdevs": 4, 00:19:46.124 "num_base_bdevs_discovered": 3, 00:19:46.124 "num_base_bdevs_operational": 3, 00:19:46.124 "base_bdevs_list": [ 00:19:46.124 { 00:19:46.124 "name": null, 00:19:46.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.124 "is_configured": false, 00:19:46.124 "data_offset": 2048, 00:19:46.124 "data_size": 63488 00:19:46.124 }, 00:19:46.124 { 00:19:46.124 "name": "BaseBdev2", 00:19:46.124 "uuid": 
"d1412c78-01ab-4d96-ba32-e215a868785f", 00:19:46.124 "is_configured": true, 00:19:46.124 "data_offset": 2048, 00:19:46.124 "data_size": 63488 00:19:46.124 }, 00:19:46.124 { 00:19:46.124 "name": "BaseBdev3", 00:19:46.124 "uuid": "0c9bc096-3b79-4d24-a681-e2a1032f6089", 00:19:46.124 "is_configured": true, 00:19:46.124 "data_offset": 2048, 00:19:46.124 "data_size": 63488 00:19:46.124 }, 00:19:46.124 { 00:19:46.124 "name": "BaseBdev4", 00:19:46.124 "uuid": "5b210592-20d2-43f5-aeda-2cf9221d50c5", 00:19:46.124 "is_configured": true, 00:19:46.124 "data_offset": 2048, 00:19:46.124 "data_size": 63488 00:19:46.124 } 00:19:46.124 ] 00:19:46.124 }' 00:19:46.124 18:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.124 18:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.690 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:46.690 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:46.690 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.690 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:46.949 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:46.949 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.949 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:47.517 [2024-07-15 18:34:32.912743] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:47.517 18:34:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:47.517 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:47.517 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.517 18:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:47.776 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:47.776 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:47.776 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:48.342 [2024-07-15 18:34:33.673372] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.342 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:48.342 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:48.342 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.342 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:48.601 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:48.601 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:48.601 18:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:48.859 [2024-07-15 18:34:34.205407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:48.859 [2024-07-15 18:34:34.205450] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1430490 name Existed_Raid, state offline 00:19:48.859 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:48.859 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:48.859 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.859 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:49.117 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:49.117 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:49.117 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:49.117 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:49.117 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:49.117 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:49.375 BaseBdev2 00:19:49.375 18:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:49.375 18:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:49.375 18:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:49.375 18:34:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:49.375 18:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:49.375 18:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:49.375 18:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:49.633 18:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:49.892 [ 00:19:49.892 { 00:19:49.892 "name": "BaseBdev2", 00:19:49.892 "aliases": [ 00:19:49.892 "c9b30731-9558-4157-a344-0506cd7d2e5d" 00:19:49.892 ], 00:19:49.892 "product_name": "Malloc disk", 00:19:49.892 "block_size": 512, 00:19:49.892 "num_blocks": 65536, 00:19:49.892 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:49.892 "assigned_rate_limits": { 00:19:49.892 "rw_ios_per_sec": 0, 00:19:49.892 "rw_mbytes_per_sec": 0, 00:19:49.892 "r_mbytes_per_sec": 0, 00:19:49.892 "w_mbytes_per_sec": 0 00:19:49.892 }, 00:19:49.892 "claimed": false, 00:19:49.892 "zoned": false, 00:19:49.892 "supported_io_types": { 00:19:49.892 "read": true, 00:19:49.892 "write": true, 00:19:49.892 "unmap": true, 00:19:49.892 "flush": true, 00:19:49.892 "reset": true, 00:19:49.892 "nvme_admin": false, 00:19:49.892 "nvme_io": false, 00:19:49.892 "nvme_io_md": false, 00:19:49.892 "write_zeroes": true, 00:19:49.892 "zcopy": true, 00:19:49.892 "get_zone_info": false, 00:19:49.892 "zone_management": false, 00:19:49.892 "zone_append": false, 00:19:49.892 "compare": false, 00:19:49.892 "compare_and_write": false, 00:19:49.892 "abort": true, 00:19:49.892 "seek_hole": false, 00:19:49.892 "seek_data": false, 00:19:49.892 "copy": true, 00:19:49.892 "nvme_iov_md": 
false 00:19:49.892 }, 00:19:49.892 "memory_domains": [ 00:19:49.892 { 00:19:49.892 "dma_device_id": "system", 00:19:49.892 "dma_device_type": 1 00:19:49.892 }, 00:19:49.892 { 00:19:49.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.892 "dma_device_type": 2 00:19:49.892 } 00:19:49.892 ], 00:19:49.892 "driver_specific": {} 00:19:49.892 } 00:19:49.892 ] 00:19:49.892 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:49.892 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:49.892 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:49.892 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:50.150 BaseBdev3 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:50.150 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.408 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:50.667 [ 00:19:50.667 { 00:19:50.667 "name": "BaseBdev3", 00:19:50.667 "aliases": [ 00:19:50.667 "75e4239c-1945-45f1-bbb5-49df7c11a1c2" 00:19:50.667 ], 00:19:50.667 "product_name": "Malloc disk", 00:19:50.667 "block_size": 512, 00:19:50.667 "num_blocks": 65536, 00:19:50.667 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:50.667 "assigned_rate_limits": { 00:19:50.667 "rw_ios_per_sec": 0, 00:19:50.667 "rw_mbytes_per_sec": 0, 00:19:50.667 "r_mbytes_per_sec": 0, 00:19:50.667 "w_mbytes_per_sec": 0 00:19:50.667 }, 00:19:50.667 "claimed": false, 00:19:50.667 "zoned": false, 00:19:50.667 "supported_io_types": { 00:19:50.667 "read": true, 00:19:50.667 "write": true, 00:19:50.667 "unmap": true, 00:19:50.667 "flush": true, 00:19:50.667 "reset": true, 00:19:50.667 "nvme_admin": false, 00:19:50.667 "nvme_io": false, 00:19:50.667 "nvme_io_md": false, 00:19:50.667 "write_zeroes": true, 00:19:50.667 "zcopy": true, 00:19:50.667 "get_zone_info": false, 00:19:50.667 "zone_management": false, 00:19:50.667 "zone_append": false, 00:19:50.667 "compare": false, 00:19:50.667 "compare_and_write": false, 00:19:50.667 "abort": true, 00:19:50.667 "seek_hole": false, 00:19:50.667 "seek_data": false, 00:19:50.667 "copy": true, 00:19:50.667 "nvme_iov_md": false 00:19:50.667 }, 00:19:50.667 "memory_domains": [ 00:19:50.667 { 00:19:50.667 "dma_device_id": "system", 00:19:50.667 "dma_device_type": 1 00:19:50.667 }, 00:19:50.667 { 00:19:50.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.667 "dma_device_type": 2 00:19:50.667 } 00:19:50.667 ], 00:19:50.667 "driver_specific": {} 00:19:50.667 } 00:19:50.667 ] 00:19:50.667 18:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:50.667 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:50.667 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:19:50.667 18:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:50.926 BaseBdev4 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:50.926 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.185 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:51.185 [ 00:19:51.185 { 00:19:51.185 "name": "BaseBdev4", 00:19:51.185 "aliases": [ 00:19:51.185 "aa5b621a-a90d-490c-af20-e583b69c58ae" 00:19:51.185 ], 00:19:51.185 "product_name": "Malloc disk", 00:19:51.185 "block_size": 512, 00:19:51.185 "num_blocks": 65536, 00:19:51.185 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:51.185 "assigned_rate_limits": { 00:19:51.185 "rw_ios_per_sec": 0, 00:19:51.185 "rw_mbytes_per_sec": 0, 00:19:51.185 "r_mbytes_per_sec": 0, 00:19:51.185 "w_mbytes_per_sec": 0 00:19:51.185 }, 00:19:51.185 "claimed": false, 00:19:51.185 "zoned": false, 00:19:51.185 "supported_io_types": { 00:19:51.185 
"read": true, 00:19:51.185 "write": true, 00:19:51.185 "unmap": true, 00:19:51.185 "flush": true, 00:19:51.185 "reset": true, 00:19:51.185 "nvme_admin": false, 00:19:51.185 "nvme_io": false, 00:19:51.185 "nvme_io_md": false, 00:19:51.185 "write_zeroes": true, 00:19:51.185 "zcopy": true, 00:19:51.185 "get_zone_info": false, 00:19:51.185 "zone_management": false, 00:19:51.185 "zone_append": false, 00:19:51.185 "compare": false, 00:19:51.185 "compare_and_write": false, 00:19:51.185 "abort": true, 00:19:51.185 "seek_hole": false, 00:19:51.185 "seek_data": false, 00:19:51.185 "copy": true, 00:19:51.185 "nvme_iov_md": false 00:19:51.185 }, 00:19:51.185 "memory_domains": [ 00:19:51.185 { 00:19:51.185 "dma_device_id": "system", 00:19:51.185 "dma_device_type": 1 00:19:51.185 }, 00:19:51.185 { 00:19:51.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.185 "dma_device_type": 2 00:19:51.185 } 00:19:51.185 ], 00:19:51.185 "driver_specific": {} 00:19:51.185 } 00:19:51.185 ] 00:19:51.443 18:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:51.443 18:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:51.443 18:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:51.443 18:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:51.443 [2024-07-15 18:34:36.984570] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.443 [2024-07-15 18:34:36.984607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.443 [2024-07-15 18:34:36.984627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.443 [2024-07-15 
18:34:36.986026] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.443 [2024-07-15 18:34:36.986070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.702 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.960 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.960 "name": "Existed_Raid", 00:19:51.960 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:19:51.960 "strip_size_kb": 64, 
00:19:51.960 "state": "configuring", 00:19:51.960 "raid_level": "concat", 00:19:51.960 "superblock": true, 00:19:51.960 "num_base_bdevs": 4, 00:19:51.960 "num_base_bdevs_discovered": 3, 00:19:51.960 "num_base_bdevs_operational": 4, 00:19:51.960 "base_bdevs_list": [ 00:19:51.960 { 00:19:51.960 "name": "BaseBdev1", 00:19:51.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.960 "is_configured": false, 00:19:51.960 "data_offset": 0, 00:19:51.960 "data_size": 0 00:19:51.960 }, 00:19:51.960 { 00:19:51.960 "name": "BaseBdev2", 00:19:51.960 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:51.960 "is_configured": true, 00:19:51.960 "data_offset": 2048, 00:19:51.960 "data_size": 63488 00:19:51.960 }, 00:19:51.960 { 00:19:51.960 "name": "BaseBdev3", 00:19:51.960 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:51.960 "is_configured": true, 00:19:51.960 "data_offset": 2048, 00:19:51.960 "data_size": 63488 00:19:51.960 }, 00:19:51.960 { 00:19:51.960 "name": "BaseBdev4", 00:19:51.960 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:51.960 "is_configured": true, 00:19:51.960 "data_offset": 2048, 00:19:51.960 "data_size": 63488 00:19:51.960 } 00:19:51.960 ] 00:19:51.960 }' 00:19:51.960 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.960 18:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.525 18:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:52.784 [2024-07-15 18:34:38.127615] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.784 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.042 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.042 "name": "Existed_Raid", 00:19:53.042 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:19:53.042 "strip_size_kb": 64, 00:19:53.042 "state": "configuring", 00:19:53.042 "raid_level": "concat", 00:19:53.042 "superblock": true, 00:19:53.042 "num_base_bdevs": 4, 00:19:53.042 "num_base_bdevs_discovered": 2, 00:19:53.042 "num_base_bdevs_operational": 4, 00:19:53.042 "base_bdevs_list": [ 00:19:53.042 { 00:19:53.042 "name": "BaseBdev1", 00:19:53.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.042 "is_configured": false, 00:19:53.042 "data_offset": 0, 00:19:53.042 "data_size": 0 
00:19:53.042 }, 00:19:53.042 { 00:19:53.042 "name": null, 00:19:53.042 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:53.042 "is_configured": false, 00:19:53.042 "data_offset": 2048, 00:19:53.042 "data_size": 63488 00:19:53.042 }, 00:19:53.042 { 00:19:53.042 "name": "BaseBdev3", 00:19:53.042 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:53.042 "is_configured": true, 00:19:53.042 "data_offset": 2048, 00:19:53.042 "data_size": 63488 00:19:53.042 }, 00:19:53.042 { 00:19:53.042 "name": "BaseBdev4", 00:19:53.042 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:53.042 "is_configured": true, 00:19:53.042 "data_offset": 2048, 00:19:53.042 "data_size": 63488 00:19:53.042 } 00:19:53.042 ] 00:19:53.042 }' 00:19:53.042 18:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.042 18:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.607 18:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:53.607 18:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.865 18:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:53.865 18:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:54.124 [2024-07-15 18:34:39.534528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.124 BaseBdev1 00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 
00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:54.124 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.381 18:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:54.639 [ 00:19:54.639 { 00:19:54.639 "name": "BaseBdev1", 00:19:54.639 "aliases": [ 00:19:54.639 "197cd244-7906-4e40-b90a-f7ad68a5f734" 00:19:54.639 ], 00:19:54.639 "product_name": "Malloc disk", 00:19:54.639 "block_size": 512, 00:19:54.639 "num_blocks": 65536, 00:19:54.639 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:19:54.639 "assigned_rate_limits": { 00:19:54.639 "rw_ios_per_sec": 0, 00:19:54.639 "rw_mbytes_per_sec": 0, 00:19:54.639 "r_mbytes_per_sec": 0, 00:19:54.639 "w_mbytes_per_sec": 0 00:19:54.639 }, 00:19:54.639 "claimed": true, 00:19:54.639 "claim_type": "exclusive_write", 00:19:54.639 "zoned": false, 00:19:54.639 "supported_io_types": { 00:19:54.639 "read": true, 00:19:54.639 "write": true, 00:19:54.639 "unmap": true, 00:19:54.639 "flush": true, 00:19:54.639 "reset": true, 00:19:54.639 "nvme_admin": false, 00:19:54.639 "nvme_io": false, 00:19:54.639 "nvme_io_md": false, 00:19:54.639 "write_zeroes": true, 00:19:54.639 "zcopy": true, 00:19:54.639 "get_zone_info": false, 00:19:54.639 "zone_management": false, 00:19:54.639 "zone_append": false, 00:19:54.639 "compare": false, 
00:19:54.639 "compare_and_write": false, 00:19:54.639 "abort": true, 00:19:54.639 "seek_hole": false, 00:19:54.639 "seek_data": false, 00:19:54.639 "copy": true, 00:19:54.639 "nvme_iov_md": false 00:19:54.639 }, 00:19:54.639 "memory_domains": [ 00:19:54.639 { 00:19:54.639 "dma_device_id": "system", 00:19:54.639 "dma_device_type": 1 00:19:54.639 }, 00:19:54.639 { 00:19:54.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.639 "dma_device_type": 2 00:19:54.639 } 00:19:54.639 ], 00:19:54.639 "driver_specific": {} 00:19:54.639 } 00:19:54.639 ] 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.639 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.640 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.640 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.640 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.898 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.898 "name": "Existed_Raid", 00:19:54.898 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:19:54.898 "strip_size_kb": 64, 00:19:54.898 "state": "configuring", 00:19:54.898 "raid_level": "concat", 00:19:54.898 "superblock": true, 00:19:54.898 "num_base_bdevs": 4, 00:19:54.898 "num_base_bdevs_discovered": 3, 00:19:54.898 "num_base_bdevs_operational": 4, 00:19:54.898 "base_bdevs_list": [ 00:19:54.898 { 00:19:54.898 "name": "BaseBdev1", 00:19:54.898 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:19:54.898 "is_configured": true, 00:19:54.898 "data_offset": 2048, 00:19:54.898 "data_size": 63488 00:19:54.898 }, 00:19:54.898 { 00:19:54.898 "name": null, 00:19:54.898 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:54.898 "is_configured": false, 00:19:54.898 "data_offset": 2048, 00:19:54.898 "data_size": 63488 00:19:54.898 }, 00:19:54.898 { 00:19:54.898 "name": "BaseBdev3", 00:19:54.898 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:54.898 "is_configured": true, 00:19:54.898 "data_offset": 2048, 00:19:54.898 "data_size": 63488 00:19:54.898 }, 00:19:54.898 { 00:19:54.898 "name": "BaseBdev4", 00:19:54.898 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:54.898 "is_configured": true, 00:19:54.898 "data_offset": 2048, 00:19:54.898 "data_size": 63488 00:19:54.898 } 00:19:54.898 ] 00:19:54.898 }' 00:19:54.898 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.898 18:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.464 18:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:55.464 18:34:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.722 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:55.723 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:55.980 [2024-07-15 18:34:41.411630] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:55.980 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.239 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.239 "name": "Existed_Raid", 00:19:56.239 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:19:56.239 "strip_size_kb": 64, 00:19:56.239 "state": "configuring", 00:19:56.239 "raid_level": "concat", 00:19:56.239 "superblock": true, 00:19:56.239 "num_base_bdevs": 4, 00:19:56.239 "num_base_bdevs_discovered": 2, 00:19:56.239 "num_base_bdevs_operational": 4, 00:19:56.239 "base_bdevs_list": [ 00:19:56.239 { 00:19:56.239 "name": "BaseBdev1", 00:19:56.239 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:19:56.239 "is_configured": true, 00:19:56.239 "data_offset": 2048, 00:19:56.239 "data_size": 63488 00:19:56.239 }, 00:19:56.239 { 00:19:56.239 "name": null, 00:19:56.239 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:56.239 "is_configured": false, 00:19:56.239 "data_offset": 2048, 00:19:56.239 "data_size": 63488 00:19:56.239 }, 00:19:56.239 { 00:19:56.239 "name": null, 00:19:56.239 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:56.239 "is_configured": false, 00:19:56.239 "data_offset": 2048, 00:19:56.239 "data_size": 63488 00:19:56.239 }, 00:19:56.239 { 00:19:56.239 "name": "BaseBdev4", 00:19:56.239 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:56.239 "is_configured": true, 00:19:56.239 "data_offset": 2048, 00:19:56.239 "data_size": 63488 00:19:56.239 } 00:19:56.239 ] 00:19:56.239 }' 00:19:56.239 18:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.239 18:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.805 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:56.805 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:57.062 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:57.062 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:57.320 [2024-07-15 18:34:42.791371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:57.320 18:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.578 18:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.578 "name": "Existed_Raid", 00:19:57.578 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:19:57.578 "strip_size_kb": 64, 00:19:57.578 "state": "configuring", 00:19:57.578 "raid_level": "concat", 00:19:57.578 "superblock": true, 00:19:57.578 "num_base_bdevs": 4, 00:19:57.578 "num_base_bdevs_discovered": 3, 00:19:57.578 "num_base_bdevs_operational": 4, 00:19:57.578 "base_bdevs_list": [ 00:19:57.578 { 00:19:57.578 "name": "BaseBdev1", 00:19:57.578 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:19:57.578 "is_configured": true, 00:19:57.578 "data_offset": 2048, 00:19:57.578 "data_size": 63488 00:19:57.578 }, 00:19:57.578 { 00:19:57.578 "name": null, 00:19:57.578 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:57.578 "is_configured": false, 00:19:57.578 "data_offset": 2048, 00:19:57.578 "data_size": 63488 00:19:57.578 }, 00:19:57.578 { 00:19:57.578 "name": "BaseBdev3", 00:19:57.578 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:57.578 "is_configured": true, 00:19:57.578 "data_offset": 2048, 00:19:57.578 "data_size": 63488 00:19:57.578 }, 00:19:57.578 { 00:19:57.578 "name": "BaseBdev4", 00:19:57.578 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:57.578 "is_configured": true, 00:19:57.578 "data_offset": 2048, 00:19:57.578 "data_size": 63488 00:19:57.578 } 00:19:57.578 ] 00:19:57.578 }' 00:19:57.578 18:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.578 18:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.511 18:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:58.511 18:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:58.511 18:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:58.511 18:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:58.769 [2024-07-15 18:34:44.211229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.769 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.770 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.770 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.770 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.770 
18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.028 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.028 "name": "Existed_Raid", 00:19:59.028 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:19:59.028 "strip_size_kb": 64, 00:19:59.028 "state": "configuring", 00:19:59.028 "raid_level": "concat", 00:19:59.028 "superblock": true, 00:19:59.028 "num_base_bdevs": 4, 00:19:59.028 "num_base_bdevs_discovered": 2, 00:19:59.028 "num_base_bdevs_operational": 4, 00:19:59.028 "base_bdevs_list": [ 00:19:59.028 { 00:19:59.028 "name": null, 00:19:59.028 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:19:59.028 "is_configured": false, 00:19:59.028 "data_offset": 2048, 00:19:59.028 "data_size": 63488 00:19:59.028 }, 00:19:59.028 { 00:19:59.028 "name": null, 00:19:59.028 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:19:59.028 "is_configured": false, 00:19:59.028 "data_offset": 2048, 00:19:59.028 "data_size": 63488 00:19:59.028 }, 00:19:59.028 { 00:19:59.028 "name": "BaseBdev3", 00:19:59.028 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:19:59.028 "is_configured": true, 00:19:59.028 "data_offset": 2048, 00:19:59.028 "data_size": 63488 00:19:59.028 }, 00:19:59.028 { 00:19:59.028 "name": "BaseBdev4", 00:19:59.028 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:19:59.028 "is_configured": true, 00:19:59.028 "data_offset": 2048, 00:19:59.028 "data_size": 63488 00:19:59.028 } 00:19:59.028 ] 00:19:59.028 }' 00:19:59.028 18:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.028 18:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.594 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.594 18:34:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:59.852 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:59.852 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:00.110 [2024-07-15 18:34:45.621680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.110 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.110 18:34:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.369 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.369 "name": "Existed_Raid", 00:20:00.369 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:20:00.369 "strip_size_kb": 64, 00:20:00.369 "state": "configuring", 00:20:00.369 "raid_level": "concat", 00:20:00.369 "superblock": true, 00:20:00.369 "num_base_bdevs": 4, 00:20:00.369 "num_base_bdevs_discovered": 3, 00:20:00.369 "num_base_bdevs_operational": 4, 00:20:00.369 "base_bdevs_list": [ 00:20:00.369 { 00:20:00.369 "name": null, 00:20:00.369 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:20:00.369 "is_configured": false, 00:20:00.369 "data_offset": 2048, 00:20:00.369 "data_size": 63488 00:20:00.369 }, 00:20:00.369 { 00:20:00.369 "name": "BaseBdev2", 00:20:00.369 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:20:00.369 "is_configured": true, 00:20:00.369 "data_offset": 2048, 00:20:00.369 "data_size": 63488 00:20:00.369 }, 00:20:00.369 { 00:20:00.369 "name": "BaseBdev3", 00:20:00.369 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:20:00.369 "is_configured": true, 00:20:00.369 "data_offset": 2048, 00:20:00.369 "data_size": 63488 00:20:00.369 }, 00:20:00.369 { 00:20:00.369 "name": "BaseBdev4", 00:20:00.369 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:20:00.369 "is_configured": true, 00:20:00.369 "data_offset": 2048, 00:20:00.369 "data_size": 63488 00:20:00.369 } 00:20:00.369 ] 00:20:00.369 }' 00:20:00.369 18:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.369 18:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.305 18:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.305 18:34:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:01.305 18:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:01.305 18:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.305 18:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:01.564 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 197cd244-7906-4e40-b90a-f7ad68a5f734 00:20:01.824 [2024-07-15 18:34:47.269444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:01.824 [2024-07-15 18:34:47.269601] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1426670 00:20:01.824 [2024-07-15 18:34:47.269612] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:01.824 [2024-07-15 18:34:47.269798] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x142fb70 00:20:01.824 [2024-07-15 18:34:47.269918] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1426670 00:20:01.824 [2024-07-15 18:34:47.269927] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1426670 00:20:01.824 [2024-07-15 18:34:47.270036] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.824 NewBaseBdev 00:20:01.824 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:01.824 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:01.824 18:34:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:01.824 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:01.824 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:01.824 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:01.824 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:02.083 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:02.376 [ 00:20:02.376 { 00:20:02.376 "name": "NewBaseBdev", 00:20:02.376 "aliases": [ 00:20:02.376 "197cd244-7906-4e40-b90a-f7ad68a5f734" 00:20:02.376 ], 00:20:02.376 "product_name": "Malloc disk", 00:20:02.376 "block_size": 512, 00:20:02.376 "num_blocks": 65536, 00:20:02.376 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:20:02.376 "assigned_rate_limits": { 00:20:02.376 "rw_ios_per_sec": 0, 00:20:02.376 "rw_mbytes_per_sec": 0, 00:20:02.376 "r_mbytes_per_sec": 0, 00:20:02.376 "w_mbytes_per_sec": 0 00:20:02.376 }, 00:20:02.376 "claimed": true, 00:20:02.376 "claim_type": "exclusive_write", 00:20:02.376 "zoned": false, 00:20:02.376 "supported_io_types": { 00:20:02.376 "read": true, 00:20:02.376 "write": true, 00:20:02.376 "unmap": true, 00:20:02.376 "flush": true, 00:20:02.376 "reset": true, 00:20:02.376 "nvme_admin": false, 00:20:02.376 "nvme_io": false, 00:20:02.376 "nvme_io_md": false, 00:20:02.376 "write_zeroes": true, 00:20:02.376 "zcopy": true, 00:20:02.376 "get_zone_info": false, 00:20:02.376 "zone_management": false, 00:20:02.376 "zone_append": false, 00:20:02.376 "compare": false, 00:20:02.376 
"compare_and_write": false, 00:20:02.376 "abort": true, 00:20:02.376 "seek_hole": false, 00:20:02.376 "seek_data": false, 00:20:02.376 "copy": true, 00:20:02.376 "nvme_iov_md": false 00:20:02.376 }, 00:20:02.376 "memory_domains": [ 00:20:02.376 { 00:20:02.376 "dma_device_id": "system", 00:20:02.376 "dma_device_type": 1 00:20:02.376 }, 00:20:02.376 { 00:20:02.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.376 "dma_device_type": 2 00:20:02.376 } 00:20:02.376 ], 00:20:02.376 "driver_specific": {} 00:20:02.376 } 00:20:02.376 ] 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.376 18:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.640 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.640 "name": "Existed_Raid", 00:20:02.640 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:20:02.640 "strip_size_kb": 64, 00:20:02.640 "state": "online", 00:20:02.640 "raid_level": "concat", 00:20:02.640 "superblock": true, 00:20:02.640 "num_base_bdevs": 4, 00:20:02.640 "num_base_bdevs_discovered": 4, 00:20:02.640 "num_base_bdevs_operational": 4, 00:20:02.640 "base_bdevs_list": [ 00:20:02.640 { 00:20:02.640 "name": "NewBaseBdev", 00:20:02.640 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:20:02.640 "is_configured": true, 00:20:02.640 "data_offset": 2048, 00:20:02.640 "data_size": 63488 00:20:02.640 }, 00:20:02.640 { 00:20:02.640 "name": "BaseBdev2", 00:20:02.640 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:20:02.640 "is_configured": true, 00:20:02.640 "data_offset": 2048, 00:20:02.640 "data_size": 63488 00:20:02.640 }, 00:20:02.640 { 00:20:02.640 "name": "BaseBdev3", 00:20:02.640 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:20:02.640 "is_configured": true, 00:20:02.640 "data_offset": 2048, 00:20:02.640 "data_size": 63488 00:20:02.640 }, 00:20:02.640 { 00:20:02.640 "name": "BaseBdev4", 00:20:02.640 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:20:02.640 "is_configured": true, 00:20:02.640 "data_offset": 2048, 00:20:02.640 "data_size": 63488 00:20:02.640 } 00:20:02.640 ] 00:20:02.640 }' 00:20:02.640 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.640 18:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:03.207 18:34:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:03.207 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:03.466 [2024-07-15 18:34:48.882160] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.466 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:03.466 "name": "Existed_Raid", 00:20:03.466 "aliases": [ 00:20:03.466 "ba736675-d270-475f-8db0-f480571fda20" 00:20:03.466 ], 00:20:03.466 "product_name": "Raid Volume", 00:20:03.466 "block_size": 512, 00:20:03.466 "num_blocks": 253952, 00:20:03.466 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:20:03.466 "assigned_rate_limits": { 00:20:03.466 "rw_ios_per_sec": 0, 00:20:03.466 "rw_mbytes_per_sec": 0, 00:20:03.466 "r_mbytes_per_sec": 0, 00:20:03.466 "w_mbytes_per_sec": 0 00:20:03.466 }, 00:20:03.466 "claimed": false, 00:20:03.466 "zoned": false, 00:20:03.466 "supported_io_types": { 00:20:03.466 "read": true, 00:20:03.466 "write": true, 00:20:03.467 "unmap": true, 00:20:03.467 "flush": true, 00:20:03.467 "reset": true, 00:20:03.467 "nvme_admin": false, 00:20:03.467 "nvme_io": false, 00:20:03.467 "nvme_io_md": false, 00:20:03.467 "write_zeroes": true, 00:20:03.467 "zcopy": false, 00:20:03.467 
"get_zone_info": false, 00:20:03.467 "zone_management": false, 00:20:03.467 "zone_append": false, 00:20:03.467 "compare": false, 00:20:03.467 "compare_and_write": false, 00:20:03.467 "abort": false, 00:20:03.467 "seek_hole": false, 00:20:03.467 "seek_data": false, 00:20:03.467 "copy": false, 00:20:03.467 "nvme_iov_md": false 00:20:03.467 }, 00:20:03.467 "memory_domains": [ 00:20:03.467 { 00:20:03.467 "dma_device_id": "system", 00:20:03.467 "dma_device_type": 1 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.467 "dma_device_type": 2 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "system", 00:20:03.467 "dma_device_type": 1 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.467 "dma_device_type": 2 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "system", 00:20:03.467 "dma_device_type": 1 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.467 "dma_device_type": 2 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "system", 00:20:03.467 "dma_device_type": 1 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.467 "dma_device_type": 2 00:20:03.467 } 00:20:03.467 ], 00:20:03.467 "driver_specific": { 00:20:03.467 "raid": { 00:20:03.467 "uuid": "ba736675-d270-475f-8db0-f480571fda20", 00:20:03.467 "strip_size_kb": 64, 00:20:03.467 "state": "online", 00:20:03.467 "raid_level": "concat", 00:20:03.467 "superblock": true, 00:20:03.467 "num_base_bdevs": 4, 00:20:03.467 "num_base_bdevs_discovered": 4, 00:20:03.467 "num_base_bdevs_operational": 4, 00:20:03.467 "base_bdevs_list": [ 00:20:03.467 { 00:20:03.467 "name": "NewBaseBdev", 00:20:03.467 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:20:03.467 "is_configured": true, 00:20:03.467 "data_offset": 2048, 00:20:03.467 "data_size": 63488 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "name": "BaseBdev2", 00:20:03.467 
"uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:20:03.467 "is_configured": true, 00:20:03.467 "data_offset": 2048, 00:20:03.467 "data_size": 63488 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "name": "BaseBdev3", 00:20:03.467 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:20:03.467 "is_configured": true, 00:20:03.467 "data_offset": 2048, 00:20:03.467 "data_size": 63488 00:20:03.467 }, 00:20:03.467 { 00:20:03.467 "name": "BaseBdev4", 00:20:03.467 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:20:03.467 "is_configured": true, 00:20:03.467 "data_offset": 2048, 00:20:03.467 "data_size": 63488 00:20:03.467 } 00:20:03.467 ] 00:20:03.467 } 00:20:03.467 } 00:20:03.467 }' 00:20:03.467 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.467 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:03.467 BaseBdev2 00:20:03.467 BaseBdev3 00:20:03.467 BaseBdev4' 00:20:03.467 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.467 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:03.467 18:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.726 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.726 "name": "NewBaseBdev", 00:20:03.726 "aliases": [ 00:20:03.726 "197cd244-7906-4e40-b90a-f7ad68a5f734" 00:20:03.726 ], 00:20:03.726 "product_name": "Malloc disk", 00:20:03.726 "block_size": 512, 00:20:03.726 "num_blocks": 65536, 00:20:03.726 "uuid": "197cd244-7906-4e40-b90a-f7ad68a5f734", 00:20:03.726 "assigned_rate_limits": { 00:20:03.726 "rw_ios_per_sec": 0, 00:20:03.726 "rw_mbytes_per_sec": 0, 
00:20:03.726 "r_mbytes_per_sec": 0, 00:20:03.726 "w_mbytes_per_sec": 0 00:20:03.726 }, 00:20:03.726 "claimed": true, 00:20:03.726 "claim_type": "exclusive_write", 00:20:03.726 "zoned": false, 00:20:03.726 "supported_io_types": { 00:20:03.726 "read": true, 00:20:03.726 "write": true, 00:20:03.726 "unmap": true, 00:20:03.726 "flush": true, 00:20:03.726 "reset": true, 00:20:03.726 "nvme_admin": false, 00:20:03.726 "nvme_io": false, 00:20:03.726 "nvme_io_md": false, 00:20:03.726 "write_zeroes": true, 00:20:03.726 "zcopy": true, 00:20:03.726 "get_zone_info": false, 00:20:03.726 "zone_management": false, 00:20:03.726 "zone_append": false, 00:20:03.726 "compare": false, 00:20:03.726 "compare_and_write": false, 00:20:03.726 "abort": true, 00:20:03.726 "seek_hole": false, 00:20:03.726 "seek_data": false, 00:20:03.726 "copy": true, 00:20:03.726 "nvme_iov_md": false 00:20:03.726 }, 00:20:03.726 "memory_domains": [ 00:20:03.726 { 00:20:03.726 "dma_device_id": "system", 00:20:03.726 "dma_device_type": 1 00:20:03.726 }, 00:20:03.726 { 00:20:03.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.726 "dma_device_type": 2 00:20:03.726 } 00:20:03.726 ], 00:20:03.726 "driver_specific": {} 00:20:03.726 }' 00:20:03.726 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.726 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.985 18:34:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.985 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.244 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.244 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.244 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.244 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:04.244 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.503 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.503 "name": "BaseBdev2", 00:20:04.503 "aliases": [ 00:20:04.503 "c9b30731-9558-4157-a344-0506cd7d2e5d" 00:20:04.503 ], 00:20:04.503 "product_name": "Malloc disk", 00:20:04.503 "block_size": 512, 00:20:04.503 "num_blocks": 65536, 00:20:04.503 "uuid": "c9b30731-9558-4157-a344-0506cd7d2e5d", 00:20:04.503 "assigned_rate_limits": { 00:20:04.503 "rw_ios_per_sec": 0, 00:20:04.503 "rw_mbytes_per_sec": 0, 00:20:04.503 "r_mbytes_per_sec": 0, 00:20:04.503 "w_mbytes_per_sec": 0 00:20:04.503 }, 00:20:04.503 "claimed": true, 00:20:04.503 "claim_type": "exclusive_write", 00:20:04.503 "zoned": false, 00:20:04.503 "supported_io_types": { 00:20:04.503 "read": true, 00:20:04.503 "write": true, 00:20:04.503 "unmap": true, 00:20:04.503 "flush": true, 00:20:04.503 "reset": true, 00:20:04.503 "nvme_admin": false, 00:20:04.503 "nvme_io": false, 00:20:04.503 "nvme_io_md": false, 00:20:04.503 "write_zeroes": true, 00:20:04.503 "zcopy": true, 00:20:04.503 
"get_zone_info": false, 00:20:04.503 "zone_management": false, 00:20:04.503 "zone_append": false, 00:20:04.503 "compare": false, 00:20:04.503 "compare_and_write": false, 00:20:04.503 "abort": true, 00:20:04.503 "seek_hole": false, 00:20:04.503 "seek_data": false, 00:20:04.503 "copy": true, 00:20:04.503 "nvme_iov_md": false 00:20:04.503 }, 00:20:04.503 "memory_domains": [ 00:20:04.503 { 00:20:04.503 "dma_device_id": "system", 00:20:04.503 "dma_device_type": 1 00:20:04.503 }, 00:20:04.503 { 00:20:04.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.503 "dma_device_type": 2 00:20:04.503 } 00:20:04.503 ], 00:20:04.503 "driver_specific": {} 00:20:04.503 }' 00:20:04.503 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.503 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.503 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.503 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.503 18:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.503 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:04.503 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:04.762 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.021 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.021 "name": "BaseBdev3", 00:20:05.021 "aliases": [ 00:20:05.021 "75e4239c-1945-45f1-bbb5-49df7c11a1c2" 00:20:05.021 ], 00:20:05.021 "product_name": "Malloc disk", 00:20:05.021 "block_size": 512, 00:20:05.021 "num_blocks": 65536, 00:20:05.021 "uuid": "75e4239c-1945-45f1-bbb5-49df7c11a1c2", 00:20:05.021 "assigned_rate_limits": { 00:20:05.021 "rw_ios_per_sec": 0, 00:20:05.021 "rw_mbytes_per_sec": 0, 00:20:05.021 "r_mbytes_per_sec": 0, 00:20:05.021 "w_mbytes_per_sec": 0 00:20:05.021 }, 00:20:05.021 "claimed": true, 00:20:05.021 "claim_type": "exclusive_write", 00:20:05.021 "zoned": false, 00:20:05.021 "supported_io_types": { 00:20:05.021 "read": true, 00:20:05.021 "write": true, 00:20:05.021 "unmap": true, 00:20:05.021 "flush": true, 00:20:05.021 "reset": true, 00:20:05.021 "nvme_admin": false, 00:20:05.021 "nvme_io": false, 00:20:05.021 "nvme_io_md": false, 00:20:05.021 "write_zeroes": true, 00:20:05.021 "zcopy": true, 00:20:05.021 "get_zone_info": false, 00:20:05.021 "zone_management": false, 00:20:05.021 "zone_append": false, 00:20:05.021 "compare": false, 00:20:05.021 "compare_and_write": false, 00:20:05.021 "abort": true, 00:20:05.021 "seek_hole": false, 00:20:05.021 "seek_data": false, 00:20:05.021 "copy": true, 00:20:05.021 "nvme_iov_md": false 00:20:05.021 }, 00:20:05.021 "memory_domains": [ 00:20:05.021 { 00:20:05.021 "dma_device_id": "system", 00:20:05.021 "dma_device_type": 1 00:20:05.021 }, 00:20:05.021 { 00:20:05.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.021 
"dma_device_type": 2 00:20:05.021 } 00:20:05.021 ], 00:20:05.021 "driver_specific": {} 00:20:05.021 }' 00:20:05.021 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.021 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.021 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.021 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.021 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.279 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:05.280 18:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.538 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.538 "name": "BaseBdev4", 00:20:05.538 "aliases": [ 00:20:05.538 
"aa5b621a-a90d-490c-af20-e583b69c58ae" 00:20:05.538 ], 00:20:05.538 "product_name": "Malloc disk", 00:20:05.538 "block_size": 512, 00:20:05.538 "num_blocks": 65536, 00:20:05.538 "uuid": "aa5b621a-a90d-490c-af20-e583b69c58ae", 00:20:05.538 "assigned_rate_limits": { 00:20:05.538 "rw_ios_per_sec": 0, 00:20:05.538 "rw_mbytes_per_sec": 0, 00:20:05.538 "r_mbytes_per_sec": 0, 00:20:05.538 "w_mbytes_per_sec": 0 00:20:05.538 }, 00:20:05.538 "claimed": true, 00:20:05.538 "claim_type": "exclusive_write", 00:20:05.538 "zoned": false, 00:20:05.538 "supported_io_types": { 00:20:05.538 "read": true, 00:20:05.538 "write": true, 00:20:05.538 "unmap": true, 00:20:05.538 "flush": true, 00:20:05.538 "reset": true, 00:20:05.538 "nvme_admin": false, 00:20:05.538 "nvme_io": false, 00:20:05.538 "nvme_io_md": false, 00:20:05.538 "write_zeroes": true, 00:20:05.538 "zcopy": true, 00:20:05.538 "get_zone_info": false, 00:20:05.538 "zone_management": false, 00:20:05.538 "zone_append": false, 00:20:05.538 "compare": false, 00:20:05.538 "compare_and_write": false, 00:20:05.538 "abort": true, 00:20:05.538 "seek_hole": false, 00:20:05.538 "seek_data": false, 00:20:05.538 "copy": true, 00:20:05.538 "nvme_iov_md": false 00:20:05.538 }, 00:20:05.538 "memory_domains": [ 00:20:05.538 { 00:20:05.538 "dma_device_id": "system", 00:20:05.538 "dma_device_type": 1 00:20:05.538 }, 00:20:05.538 { 00:20:05.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.538 "dma_device_type": 2 00:20:05.538 } 00:20:05.538 ], 00:20:05.538 "driver_specific": {} 00:20:05.538 }' 00:20:05.538 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.538 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.797 18:34:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.797 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:06.055 [2024-07-15 18:34:51.545005] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:06.055 [2024-07-15 18:34:51.545036] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.055 [2024-07-15 18:34:51.545091] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.055 [2024-07-15 18:34:51.545150] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.055 [2024-07-15 18:34:51.545159] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1426670 name Existed_Raid, state offline 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2856249 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2856249 ']' 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@952 -- # kill -0 2856249 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.055 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2856249 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2856249' 00:20:06.314 killing process with pid 2856249 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2856249 00:20:06.314 [2024-07-15 18:34:51.615774] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2856249 00:20:06.314 [2024-07-15 18:34:51.651353] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:06.314 00:20:06.314 real 0m35.207s 00:20:06.314 user 1m6.299s 00:20:06.314 sys 0m4.679s 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.314 18:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.314 ************************************ 00:20:06.314 END TEST raid_state_function_test_sb 00:20:06.314 ************************************ 00:20:06.573 18:34:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:06.573 18:34:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test 
raid_superblock_test raid_superblock_test concat 4 00:20:06.573 18:34:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:06.573 18:34:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.573 18:34:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.573 ************************************ 00:20:06.573 START TEST raid_superblock_test 00:20:06.573 ************************************ 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2862420 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2862420 /var/tmp/spdk-raid.sock 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2862420 ']' 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:06.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.573 18:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.573 [2024-07-15 18:34:51.951920] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:20:06.574 [2024-07-15 18:34:51.951983] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862420 ] 00:20:06.574 [2024-07-15 18:34:52.049589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.833 [2024-07-15 18:34:52.145005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.833 [2024-07-15 18:34:52.199686] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.833 [2024-07-15 18:34:52.199714] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:07.400 18:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:07.659 malloc1 00:20:07.659 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.917 [2024-07-15 18:34:53.400827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.917 [2024-07-15 18:34:53.400873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.917 [2024-07-15 18:34:53.400892] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8aee20 00:20:07.917 [2024-07-15 18:34:53.400900] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.917 [2024-07-15 18:34:53.402643] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.917 [2024-07-15 18:34:53.402673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.917 pt1 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:07.917 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:07.918 18:34:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:08.175 malloc2 00:20:08.175 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.433 [2024-07-15 18:34:53.906844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.433 [2024-07-15 18:34:53.906888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.433 [2024-07-15 18:34:53.906902] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa58ed0 00:20:08.433 [2024-07-15 18:34:53.906911] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.433 [2024-07-15 18:34:53.908504] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.433 [2024-07-15 18:34:53.908531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.433 pt2 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.433 18:34:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.433 18:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:08.691 malloc3 00:20:08.691 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:08.950 [2024-07-15 18:34:54.424906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:08.950 [2024-07-15 18:34:54.424963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.950 [2024-07-15 18:34:54.424981] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5ca30 00:20:08.950 [2024-07-15 18:34:54.424990] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.950 [2024-07-15 18:34:54.426619] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.950 [2024-07-15 18:34:54.426646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:08.950 pt3 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.950 
18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.950 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:09.208 malloc4 00:20:09.208 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:09.467 [2024-07-15 18:34:54.930846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:09.467 [2024-07-15 18:34:54.930893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.467 [2024-07-15 18:34:54.930908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa59900 00:20:09.467 [2024-07-15 18:34:54.930918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.467 [2024-07-15 18:34:54.932522] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.467 [2024-07-15 18:34:54.932550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:09.467 pt4 00:20:09.467 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:09.467 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:09.467 18:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:09.726 [2024-07-15 18:34:55.187546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:20:09.726 [2024-07-15 18:34:55.188915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.726 [2024-07-15 18:34:55.188981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:09.726 [2024-07-15 18:34:55.189027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:09.726 [2024-07-15 18:34:55.189203] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa5cd40 00:20:09.726 [2024-07-15 18:34:55.189213] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:09.726 [2024-07-15 18:34:55.189419] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa61140 00:20:09.726 [2024-07-15 18:34:55.189569] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa5cd40 00:20:09.726 [2024-07-15 18:34:55.189578] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa5cd40 00:20:09.726 [2024-07-15 18:34:55.189679] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.726 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.984 18:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.984 "name": "raid_bdev1", 00:20:09.984 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:09.984 "strip_size_kb": 64, 00:20:09.984 "state": "online", 00:20:09.984 "raid_level": "concat", 00:20:09.984 "superblock": true, 00:20:09.984 "num_base_bdevs": 4, 00:20:09.984 "num_base_bdevs_discovered": 4, 00:20:09.984 "num_base_bdevs_operational": 4, 00:20:09.984 "base_bdevs_list": [ 00:20:09.984 { 00:20:09.984 "name": "pt1", 00:20:09.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.984 "is_configured": true, 00:20:09.984 "data_offset": 2048, 00:20:09.984 "data_size": 63488 00:20:09.984 }, 00:20:09.984 { 00:20:09.984 "name": "pt2", 00:20:09.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.984 "is_configured": true, 00:20:09.984 "data_offset": 2048, 00:20:09.984 "data_size": 63488 00:20:09.984 }, 00:20:09.984 { 00:20:09.984 "name": "pt3", 00:20:09.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:09.984 "is_configured": true, 00:20:09.984 "data_offset": 2048, 00:20:09.984 "data_size": 63488 00:20:09.984 }, 00:20:09.984 { 00:20:09.984 "name": "pt4", 00:20:09.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:09.984 "is_configured": true, 00:20:09.984 "data_offset": 2048, 00:20:09.984 "data_size": 63488 00:20:09.984 } 00:20:09.984 ] 00:20:09.984 }' 00:20:09.984 18:34:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.984 18:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:10.918 [2024-07-15 18:34:56.354999] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.918 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:10.918 "name": "raid_bdev1", 00:20:10.918 "aliases": [ 00:20:10.918 "2cd12479-5a78-40db-a6e1-905558d2f6a5" 00:20:10.918 ], 00:20:10.918 "product_name": "Raid Volume", 00:20:10.918 "block_size": 512, 00:20:10.918 "num_blocks": 253952, 00:20:10.918 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:10.918 "assigned_rate_limits": { 00:20:10.918 "rw_ios_per_sec": 0, 00:20:10.918 "rw_mbytes_per_sec": 0, 00:20:10.918 "r_mbytes_per_sec": 0, 00:20:10.918 "w_mbytes_per_sec": 0 00:20:10.918 }, 00:20:10.918 "claimed": false, 00:20:10.918 "zoned": false, 00:20:10.918 "supported_io_types": { 00:20:10.918 "read": true, 00:20:10.918 "write": true, 00:20:10.918 
"unmap": true, 00:20:10.918 "flush": true, 00:20:10.918 "reset": true, 00:20:10.918 "nvme_admin": false, 00:20:10.918 "nvme_io": false, 00:20:10.918 "nvme_io_md": false, 00:20:10.918 "write_zeroes": true, 00:20:10.918 "zcopy": false, 00:20:10.918 "get_zone_info": false, 00:20:10.918 "zone_management": false, 00:20:10.918 "zone_append": false, 00:20:10.918 "compare": false, 00:20:10.918 "compare_and_write": false, 00:20:10.918 "abort": false, 00:20:10.918 "seek_hole": false, 00:20:10.918 "seek_data": false, 00:20:10.918 "copy": false, 00:20:10.918 "nvme_iov_md": false 00:20:10.918 }, 00:20:10.918 "memory_domains": [ 00:20:10.918 { 00:20:10.918 "dma_device_id": "system", 00:20:10.918 "dma_device_type": 1 00:20:10.918 }, 00:20:10.918 { 00:20:10.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.918 "dma_device_type": 2 00:20:10.918 }, 00:20:10.918 { 00:20:10.918 "dma_device_id": "system", 00:20:10.918 "dma_device_type": 1 00:20:10.918 }, 00:20:10.918 { 00:20:10.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.918 "dma_device_type": 2 00:20:10.918 }, 00:20:10.918 { 00:20:10.918 "dma_device_id": "system", 00:20:10.919 "dma_device_type": 1 00:20:10.919 }, 00:20:10.919 { 00:20:10.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.919 "dma_device_type": 2 00:20:10.919 }, 00:20:10.919 { 00:20:10.919 "dma_device_id": "system", 00:20:10.919 "dma_device_type": 1 00:20:10.919 }, 00:20:10.919 { 00:20:10.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.919 "dma_device_type": 2 00:20:10.919 } 00:20:10.919 ], 00:20:10.919 "driver_specific": { 00:20:10.919 "raid": { 00:20:10.919 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:10.919 "strip_size_kb": 64, 00:20:10.919 "state": "online", 00:20:10.919 "raid_level": "concat", 00:20:10.919 "superblock": true, 00:20:10.919 "num_base_bdevs": 4, 00:20:10.919 "num_base_bdevs_discovered": 4, 00:20:10.919 "num_base_bdevs_operational": 4, 00:20:10.919 "base_bdevs_list": [ 00:20:10.919 { 00:20:10.919 "name": "pt1", 
00:20:10.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.919 "is_configured": true, 00:20:10.919 "data_offset": 2048, 00:20:10.919 "data_size": 63488 00:20:10.919 }, 00:20:10.919 { 00:20:10.919 "name": "pt2", 00:20:10.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.919 "is_configured": true, 00:20:10.919 "data_offset": 2048, 00:20:10.919 "data_size": 63488 00:20:10.919 }, 00:20:10.919 { 00:20:10.919 "name": "pt3", 00:20:10.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:10.919 "is_configured": true, 00:20:10.919 "data_offset": 2048, 00:20:10.919 "data_size": 63488 00:20:10.919 }, 00:20:10.919 { 00:20:10.919 "name": "pt4", 00:20:10.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:10.919 "is_configured": true, 00:20:10.919 "data_offset": 2048, 00:20:10.919 "data_size": 63488 00:20:10.919 } 00:20:10.919 ] 00:20:10.919 } 00:20:10.919 } 00:20:10.919 }' 00:20:10.919 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.919 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:10.919 pt2 00:20:10.919 pt3 00:20:10.919 pt4' 00:20:10.919 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.919 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:10.919 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.177 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.177 "name": "pt1", 00:20:11.177 "aliases": [ 00:20:11.177 "00000000-0000-0000-0000-000000000001" 00:20:11.177 ], 00:20:11.177 "product_name": "passthru", 00:20:11.177 "block_size": 512, 00:20:11.177 "num_blocks": 65536, 00:20:11.177 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:11.177 "assigned_rate_limits": { 00:20:11.177 "rw_ios_per_sec": 0, 00:20:11.177 "rw_mbytes_per_sec": 0, 00:20:11.177 "r_mbytes_per_sec": 0, 00:20:11.177 "w_mbytes_per_sec": 0 00:20:11.177 }, 00:20:11.177 "claimed": true, 00:20:11.177 "claim_type": "exclusive_write", 00:20:11.177 "zoned": false, 00:20:11.177 "supported_io_types": { 00:20:11.177 "read": true, 00:20:11.177 "write": true, 00:20:11.177 "unmap": true, 00:20:11.177 "flush": true, 00:20:11.177 "reset": true, 00:20:11.177 "nvme_admin": false, 00:20:11.177 "nvme_io": false, 00:20:11.177 "nvme_io_md": false, 00:20:11.177 "write_zeroes": true, 00:20:11.177 "zcopy": true, 00:20:11.177 "get_zone_info": false, 00:20:11.177 "zone_management": false, 00:20:11.177 "zone_append": false, 00:20:11.177 "compare": false, 00:20:11.177 "compare_and_write": false, 00:20:11.177 "abort": true, 00:20:11.177 "seek_hole": false, 00:20:11.177 "seek_data": false, 00:20:11.177 "copy": true, 00:20:11.177 "nvme_iov_md": false 00:20:11.177 }, 00:20:11.177 "memory_domains": [ 00:20:11.177 { 00:20:11.177 "dma_device_id": "system", 00:20:11.177 "dma_device_type": 1 00:20:11.177 }, 00:20:11.177 { 00:20:11.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.177 "dma_device_type": 2 00:20:11.177 } 00:20:11.177 ], 00:20:11.177 "driver_specific": { 00:20:11.177 "passthru": { 00:20:11.177 "name": "pt1", 00:20:11.177 "base_bdev_name": "malloc1" 00:20:11.177 } 00:20:11.177 } 00:20:11.177 }' 00:20:11.177 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.436 18:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.436 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.694 18:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.694 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.694 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.694 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:11.694 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.952 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.952 "name": "pt2", 00:20:11.952 "aliases": [ 00:20:11.952 "00000000-0000-0000-0000-000000000002" 00:20:11.952 ], 00:20:11.952 "product_name": "passthru", 00:20:11.952 "block_size": 512, 00:20:11.952 "num_blocks": 65536, 00:20:11.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.952 "assigned_rate_limits": { 00:20:11.952 "rw_ios_per_sec": 0, 00:20:11.952 "rw_mbytes_per_sec": 0, 00:20:11.952 "r_mbytes_per_sec": 0, 00:20:11.952 "w_mbytes_per_sec": 0 00:20:11.952 }, 00:20:11.952 "claimed": true, 00:20:11.952 "claim_type": "exclusive_write", 00:20:11.952 "zoned": false, 00:20:11.952 "supported_io_types": { 00:20:11.952 "read": true, 00:20:11.952 "write": true, 00:20:11.953 "unmap": true, 00:20:11.953 "flush": true, 00:20:11.953 "reset": true, 00:20:11.953 "nvme_admin": false, 00:20:11.953 
"nvme_io": false, 00:20:11.953 "nvme_io_md": false, 00:20:11.953 "write_zeroes": true, 00:20:11.953 "zcopy": true, 00:20:11.953 "get_zone_info": false, 00:20:11.953 "zone_management": false, 00:20:11.953 "zone_append": false, 00:20:11.953 "compare": false, 00:20:11.953 "compare_and_write": false, 00:20:11.953 "abort": true, 00:20:11.953 "seek_hole": false, 00:20:11.953 "seek_data": false, 00:20:11.953 "copy": true, 00:20:11.953 "nvme_iov_md": false 00:20:11.953 }, 00:20:11.953 "memory_domains": [ 00:20:11.953 { 00:20:11.953 "dma_device_id": "system", 00:20:11.953 "dma_device_type": 1 00:20:11.953 }, 00:20:11.953 { 00:20:11.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.953 "dma_device_type": 2 00:20:11.953 } 00:20:11.953 ], 00:20:11.953 "driver_specific": { 00:20:11.953 "passthru": { 00:20:11.953 "name": "pt2", 00:20:11.953 "base_bdev_name": "malloc2" 00:20:11.953 } 00:20:11.953 } 00:20:11.953 }' 00:20:11.953 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.953 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.953 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.953 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.953 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:12.212 18:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.779 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.779 "name": "pt3", 00:20:12.779 "aliases": [ 00:20:12.779 "00000000-0000-0000-0000-000000000003" 00:20:12.779 ], 00:20:12.779 "product_name": "passthru", 00:20:12.779 "block_size": 512, 00:20:12.779 "num_blocks": 65536, 00:20:12.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:12.779 "assigned_rate_limits": { 00:20:12.779 "rw_ios_per_sec": 0, 00:20:12.779 "rw_mbytes_per_sec": 0, 00:20:12.779 "r_mbytes_per_sec": 0, 00:20:12.779 "w_mbytes_per_sec": 0 00:20:12.779 }, 00:20:12.780 "claimed": true, 00:20:12.780 "claim_type": "exclusive_write", 00:20:12.780 "zoned": false, 00:20:12.780 "supported_io_types": { 00:20:12.780 "read": true, 00:20:12.780 "write": true, 00:20:12.780 "unmap": true, 00:20:12.780 "flush": true, 00:20:12.780 "reset": true, 00:20:12.780 "nvme_admin": false, 00:20:12.780 "nvme_io": false, 00:20:12.780 "nvme_io_md": false, 00:20:12.780 "write_zeroes": true, 00:20:12.780 "zcopy": true, 00:20:12.780 "get_zone_info": false, 00:20:12.780 "zone_management": false, 00:20:12.780 "zone_append": false, 00:20:12.780 "compare": false, 00:20:12.780 "compare_and_write": false, 00:20:12.780 "abort": true, 00:20:12.780 "seek_hole": false, 00:20:12.780 "seek_data": false, 00:20:12.780 "copy": true, 00:20:12.780 "nvme_iov_md": false 00:20:12.780 }, 00:20:12.780 "memory_domains": [ 00:20:12.780 { 00:20:12.780 "dma_device_id": "system", 00:20:12.780 
"dma_device_type": 1 00:20:12.780 }, 00:20:12.780 { 00:20:12.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.780 "dma_device_type": 2 00:20:12.780 } 00:20:12.780 ], 00:20:12.780 "driver_specific": { 00:20:12.780 "passthru": { 00:20:12.780 "name": "pt3", 00:20:12.780 "base_bdev_name": "malloc3" 00:20:12.780 } 00:20:12.780 } 00:20:12.780 }' 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.780 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:13.038 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:13.296 18:34:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:13.296 "name": "pt4", 00:20:13.296 "aliases": [ 00:20:13.296 "00000000-0000-0000-0000-000000000004" 00:20:13.296 ], 00:20:13.296 "product_name": "passthru", 00:20:13.296 "block_size": 512, 00:20:13.296 "num_blocks": 65536, 00:20:13.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:13.296 "assigned_rate_limits": { 00:20:13.296 "rw_ios_per_sec": 0, 00:20:13.296 "rw_mbytes_per_sec": 0, 00:20:13.296 "r_mbytes_per_sec": 0, 00:20:13.296 "w_mbytes_per_sec": 0 00:20:13.296 }, 00:20:13.296 "claimed": true, 00:20:13.296 "claim_type": "exclusive_write", 00:20:13.296 "zoned": false, 00:20:13.296 "supported_io_types": { 00:20:13.296 "read": true, 00:20:13.296 "write": true, 00:20:13.296 "unmap": true, 00:20:13.296 "flush": true, 00:20:13.296 "reset": true, 00:20:13.296 "nvme_admin": false, 00:20:13.296 "nvme_io": false, 00:20:13.296 "nvme_io_md": false, 00:20:13.296 "write_zeroes": true, 00:20:13.296 "zcopy": true, 00:20:13.296 "get_zone_info": false, 00:20:13.296 "zone_management": false, 00:20:13.296 "zone_append": false, 00:20:13.296 "compare": false, 00:20:13.296 "compare_and_write": false, 00:20:13.296 "abort": true, 00:20:13.296 "seek_hole": false, 00:20:13.296 "seek_data": false, 00:20:13.296 "copy": true, 00:20:13.296 "nvme_iov_md": false 00:20:13.296 }, 00:20:13.296 "memory_domains": [ 00:20:13.296 { 00:20:13.296 "dma_device_id": "system", 00:20:13.296 "dma_device_type": 1 00:20:13.296 }, 00:20:13.296 { 00:20:13.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.296 "dma_device_type": 2 00:20:13.296 } 00:20:13.296 ], 00:20:13.296 "driver_specific": { 00:20:13.296 "passthru": { 00:20:13.296 "name": "pt4", 00:20:13.296 "base_bdev_name": "malloc4" 00:20:13.296 } 00:20:13.296 } 00:20:13.296 }' 00:20:13.296 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.296 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.554 18:34:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:13.554 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.554 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.554 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:13.554 18:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.554 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.554 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:13.554 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.554 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.813 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:13.813 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:13.813 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:14.071 [2024-07-15 18:34:59.383171] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.071 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2cd12479-5a78-40db-a6e1-905558d2f6a5 00:20:14.071 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2cd12479-5a78-40db-a6e1-905558d2f6a5 ']' 00:20:14.071 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:14.330 [2024-07-15 18:34:59.647531] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.330 
[2024-07-15 18:34:59.647549] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.330 [2024-07-15 18:34:59.647594] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.330 [2024-07-15 18:34:59.647658] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.330 [2024-07-15 18:34:59.647667] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa5cd40 name raid_bdev1, state offline 00:20:14.330 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.330 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:14.588 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:14.588 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:14.588 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:14.588 18:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:14.847 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:14.847 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:15.105 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:15.105 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:15.364 18:35:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:15.364 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:15.622 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:15.622 18:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:15.881 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:16.140 [2024-07-15 18:35:01.516474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:16.140 [2024-07-15 18:35:01.517890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:16.140 [2024-07-15 18:35:01.517934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:16.140 [2024-07-15 18:35:01.517982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:16.140 [2024-07-15 18:35:01.518028] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:16.140 [2024-07-15 18:35:01.518064] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:16.140 [2024-07-15 18:35:01.518084] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:16.140 [2024-07-15 18:35:01.518103] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:16.140 
[2024-07-15 18:35:01.518118] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.140 [2024-07-15 18:35:01.518126] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa59100 name raid_bdev1, state configuring 00:20:16.140 request: 00:20:16.140 { 00:20:16.140 "name": "raid_bdev1", 00:20:16.140 "raid_level": "concat", 00:20:16.140 "base_bdevs": [ 00:20:16.140 "malloc1", 00:20:16.140 "malloc2", 00:20:16.140 "malloc3", 00:20:16.140 "malloc4" 00:20:16.140 ], 00:20:16.140 "strip_size_kb": 64, 00:20:16.140 "superblock": false, 00:20:16.140 "method": "bdev_raid_create", 00:20:16.140 "req_id": 1 00:20:16.140 } 00:20:16.140 Got JSON-RPC error response 00:20:16.140 response: 00:20:16.140 { 00:20:16.140 "code": -17, 00:20:16.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:16.140 } 00:20:16.140 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:16.140 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.140 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.140 18:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.140 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.140 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:16.398 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:16.398 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:16.398 18:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:20:16.656 [2024-07-15 18:35:02.013730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.656 [2024-07-15 18:35:02.013772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.657 [2024-07-15 18:35:02.013790] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5b980 00:20:16.657 [2024-07-15 18:35:02.013799] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.657 [2024-07-15 18:35:02.015457] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.657 [2024-07-15 18:35:02.015486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.657 [2024-07-15 18:35:02.015548] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:16.657 [2024-07-15 18:35:02.015573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.657 pt1 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.657 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.915 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.915 "name": "raid_bdev1", 00:20:16.915 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:16.915 "strip_size_kb": 64, 00:20:16.915 "state": "configuring", 00:20:16.915 "raid_level": "concat", 00:20:16.915 "superblock": true, 00:20:16.915 "num_base_bdevs": 4, 00:20:16.915 "num_base_bdevs_discovered": 1, 00:20:16.915 "num_base_bdevs_operational": 4, 00:20:16.915 "base_bdevs_list": [ 00:20:16.915 { 00:20:16.915 "name": "pt1", 00:20:16.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:16.915 "is_configured": true, 00:20:16.915 "data_offset": 2048, 00:20:16.915 "data_size": 63488 00:20:16.915 }, 00:20:16.915 { 00:20:16.915 "name": null, 00:20:16.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.915 "is_configured": false, 00:20:16.915 "data_offset": 2048, 00:20:16.915 "data_size": 63488 00:20:16.915 }, 00:20:16.915 { 00:20:16.915 "name": null, 00:20:16.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:16.915 "is_configured": false, 00:20:16.915 "data_offset": 2048, 00:20:16.915 "data_size": 63488 00:20:16.915 }, 00:20:16.915 { 00:20:16.915 "name": null, 00:20:16.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:16.915 "is_configured": false, 00:20:16.915 "data_offset": 2048, 00:20:16.915 "data_size": 63488 00:20:16.915 } 00:20:16.915 ] 00:20:16.915 }' 00:20:16.915 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.915 18:35:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.482 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:20:17.482 18:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.741 [2024-07-15 18:35:03.225014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.741 [2024-07-15 18:35:03.225061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.741 [2024-07-15 18:35:03.225080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5bc80 00:20:17.741 [2024-07-15 18:35:03.225090] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.741 [2024-07-15 18:35:03.225424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.741 [2024-07-15 18:35:03.225440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.741 [2024-07-15 18:35:03.225497] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:17.741 [2024-07-15 18:35:03.225515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.741 pt2 00:20:17.741 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:18.000 [2024-07-15 18:35:03.485718] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:18.000 18:35:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.000 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.258 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.258 "name": "raid_bdev1", 00:20:18.258 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:18.258 "strip_size_kb": 64, 00:20:18.258 "state": "configuring", 00:20:18.258 "raid_level": "concat", 00:20:18.258 "superblock": true, 00:20:18.258 "num_base_bdevs": 4, 00:20:18.258 "num_base_bdevs_discovered": 1, 00:20:18.258 "num_base_bdevs_operational": 4, 00:20:18.258 "base_bdevs_list": [ 00:20:18.258 { 00:20:18.258 "name": "pt1", 00:20:18.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:18.258 "is_configured": true, 00:20:18.258 "data_offset": 2048, 00:20:18.258 "data_size": 63488 00:20:18.258 }, 00:20:18.258 { 00:20:18.258 "name": null, 00:20:18.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:18.258 
"is_configured": false, 00:20:18.258 "data_offset": 2048, 00:20:18.258 "data_size": 63488 00:20:18.258 }, 00:20:18.258 { 00:20:18.258 "name": null, 00:20:18.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:18.258 "is_configured": false, 00:20:18.258 "data_offset": 2048, 00:20:18.258 "data_size": 63488 00:20:18.258 }, 00:20:18.258 { 00:20:18.258 "name": null, 00:20:18.258 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:18.258 "is_configured": false, 00:20:18.258 "data_offset": 2048, 00:20:18.258 "data_size": 63488 00:20:18.258 } 00:20:18.258 ] 00:20:18.258 }' 00:20:18.258 18:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.258 18:35:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.225 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:19.225 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:19.225 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:19.225 [2024-07-15 18:35:04.697123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:19.225 [2024-07-15 18:35:04.697171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.225 [2024-07-15 18:35:04.697191] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5beb0 00:20:19.225 [2024-07-15 18:35:04.697200] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.225 [2024-07-15 18:35:04.697530] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.225 [2024-07-15 18:35:04.697548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:19.225 [2024-07-15 18:35:04.697607] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:19.225 [2024-07-15 18:35:04.697624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:19.225 pt2 00:20:19.225 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:19.225 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:19.225 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:19.483 [2024-07-15 18:35:04.957817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:19.483 [2024-07-15 18:35:04.957858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.483 [2024-07-15 18:35:04.957872] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8adda0 00:20:19.483 [2024-07-15 18:35:04.957881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.483 [2024-07-15 18:35:04.958182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.483 [2024-07-15 18:35:04.958198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:19.483 [2024-07-15 18:35:04.958248] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:19.483 [2024-07-15 18:35:04.958265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:19.483 pt3 00:20:19.483 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:19.483 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:19.483 18:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:19.740 [2024-07-15 18:35:05.218515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:19.740 [2024-07-15 18:35:05.218552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.740 [2024-07-15 18:35:05.218566] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5eed0 00:20:19.740 [2024-07-15 18:35:05.218576] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.740 [2024-07-15 18:35:05.218865] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.740 [2024-07-15 18:35:05.218881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:19.740 [2024-07-15 18:35:05.218929] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:19.740 [2024-07-15 18:35:05.218945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:19.740 [2024-07-15 18:35:05.219077] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa5b3c0 00:20:19.740 [2024-07-15 18:35:05.219086] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:19.740 [2024-07-15 18:35:05.219260] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa5b890 00:20:19.740 [2024-07-15 18:35:05.219392] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa5b3c0 00:20:19.740 [2024-07-15 18:35:05.219400] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa5b3c0 00:20:19.740 [2024-07-15 18:35:05.219498] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.740 pt4 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.740 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.741 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.741 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.741 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.998 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.998 "name": "raid_bdev1", 00:20:19.998 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:19.998 "strip_size_kb": 64, 00:20:19.998 "state": "online", 00:20:19.998 "raid_level": "concat", 00:20:19.998 "superblock": true, 00:20:19.998 "num_base_bdevs": 4, 00:20:19.998 "num_base_bdevs_discovered": 4, 00:20:19.998 "num_base_bdevs_operational": 4, 00:20:19.998 "base_bdevs_list": [ 00:20:19.998 { 
00:20:19.998 "name": "pt1", 00:20:19.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:19.998 "is_configured": true, 00:20:19.998 "data_offset": 2048, 00:20:19.998 "data_size": 63488 00:20:19.998 }, 00:20:19.998 { 00:20:19.998 "name": "pt2", 00:20:19.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:19.998 "is_configured": true, 00:20:19.998 "data_offset": 2048, 00:20:19.998 "data_size": 63488 00:20:19.998 }, 00:20:19.998 { 00:20:19.998 "name": "pt3", 00:20:19.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:19.998 "is_configured": true, 00:20:19.998 "data_offset": 2048, 00:20:19.998 "data_size": 63488 00:20:19.998 }, 00:20:19.998 { 00:20:19.998 "name": "pt4", 00:20:19.998 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:19.998 "is_configured": true, 00:20:19.998 "data_offset": 2048, 00:20:19.998 "data_size": 63488 00:20:19.998 } 00:20:19.998 ] 00:20:19.998 }' 00:20:19.998 18:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.998 18:35:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:20.973 [2024-07-15 18:35:06.381979] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:20.973 "name": "raid_bdev1", 00:20:20.973 "aliases": [ 00:20:20.973 "2cd12479-5a78-40db-a6e1-905558d2f6a5" 00:20:20.973 ], 00:20:20.973 "product_name": "Raid Volume", 00:20:20.973 "block_size": 512, 00:20:20.973 "num_blocks": 253952, 00:20:20.973 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:20.973 "assigned_rate_limits": { 00:20:20.973 "rw_ios_per_sec": 0, 00:20:20.973 "rw_mbytes_per_sec": 0, 00:20:20.973 "r_mbytes_per_sec": 0, 00:20:20.973 "w_mbytes_per_sec": 0 00:20:20.973 }, 00:20:20.973 "claimed": false, 00:20:20.973 "zoned": false, 00:20:20.973 "supported_io_types": { 00:20:20.973 "read": true, 00:20:20.973 "write": true, 00:20:20.973 "unmap": true, 00:20:20.973 "flush": true, 00:20:20.973 "reset": true, 00:20:20.973 "nvme_admin": false, 00:20:20.973 "nvme_io": false, 00:20:20.973 "nvme_io_md": false, 00:20:20.973 "write_zeroes": true, 00:20:20.973 "zcopy": false, 00:20:20.973 "get_zone_info": false, 00:20:20.973 "zone_management": false, 00:20:20.973 "zone_append": false, 00:20:20.973 "compare": false, 00:20:20.973 "compare_and_write": false, 00:20:20.973 "abort": false, 00:20:20.973 "seek_hole": false, 00:20:20.973 "seek_data": false, 00:20:20.973 "copy": false, 00:20:20.973 "nvme_iov_md": false 00:20:20.973 }, 00:20:20.973 "memory_domains": [ 00:20:20.973 { 00:20:20.973 "dma_device_id": "system", 00:20:20.973 "dma_device_type": 1 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.973 "dma_device_type": 2 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "dma_device_id": "system", 00:20:20.973 "dma_device_type": 1 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.973 "dma_device_type": 2 00:20:20.973 }, 
00:20:20.973 { 00:20:20.973 "dma_device_id": "system", 00:20:20.973 "dma_device_type": 1 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.973 "dma_device_type": 2 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "dma_device_id": "system", 00:20:20.973 "dma_device_type": 1 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.973 "dma_device_type": 2 00:20:20.973 } 00:20:20.973 ], 00:20:20.973 "driver_specific": { 00:20:20.973 "raid": { 00:20:20.973 "uuid": "2cd12479-5a78-40db-a6e1-905558d2f6a5", 00:20:20.973 "strip_size_kb": 64, 00:20:20.973 "state": "online", 00:20:20.973 "raid_level": "concat", 00:20:20.973 "superblock": true, 00:20:20.973 "num_base_bdevs": 4, 00:20:20.973 "num_base_bdevs_discovered": 4, 00:20:20.973 "num_base_bdevs_operational": 4, 00:20:20.973 "base_bdevs_list": [ 00:20:20.973 { 00:20:20.973 "name": "pt1", 00:20:20.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:20.973 "is_configured": true, 00:20:20.973 "data_offset": 2048, 00:20:20.973 "data_size": 63488 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "name": "pt2", 00:20:20.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:20.973 "is_configured": true, 00:20:20.973 "data_offset": 2048, 00:20:20.973 "data_size": 63488 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "name": "pt3", 00:20:20.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:20.973 "is_configured": true, 00:20:20.973 "data_offset": 2048, 00:20:20.973 "data_size": 63488 00:20:20.973 }, 00:20:20.973 { 00:20:20.973 "name": "pt4", 00:20:20.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:20.973 "is_configured": true, 00:20:20.973 "data_offset": 2048, 00:20:20.973 "data_size": 63488 00:20:20.973 } 00:20:20.973 ] 00:20:20.973 } 00:20:20.973 } 00:20:20.973 }' 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:20.973 pt2 00:20:20.973 pt3 00:20:20.973 pt4' 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:20.973 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.230 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.230 "name": "pt1", 00:20:21.230 "aliases": [ 00:20:21.230 "00000000-0000-0000-0000-000000000001" 00:20:21.230 ], 00:20:21.230 "product_name": "passthru", 00:20:21.230 "block_size": 512, 00:20:21.230 "num_blocks": 65536, 00:20:21.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.230 "assigned_rate_limits": { 00:20:21.230 "rw_ios_per_sec": 0, 00:20:21.230 "rw_mbytes_per_sec": 0, 00:20:21.230 "r_mbytes_per_sec": 0, 00:20:21.230 "w_mbytes_per_sec": 0 00:20:21.230 }, 00:20:21.230 "claimed": true, 00:20:21.230 "claim_type": "exclusive_write", 00:20:21.230 "zoned": false, 00:20:21.230 "supported_io_types": { 00:20:21.230 "read": true, 00:20:21.230 "write": true, 00:20:21.230 "unmap": true, 00:20:21.230 "flush": true, 00:20:21.230 "reset": true, 00:20:21.230 "nvme_admin": false, 00:20:21.230 "nvme_io": false, 00:20:21.230 "nvme_io_md": false, 00:20:21.230 "write_zeroes": true, 00:20:21.230 "zcopy": true, 00:20:21.230 "get_zone_info": false, 00:20:21.230 "zone_management": false, 00:20:21.230 "zone_append": false, 00:20:21.230 "compare": false, 00:20:21.230 "compare_and_write": false, 00:20:21.230 "abort": true, 00:20:21.230 "seek_hole": false, 00:20:21.230 "seek_data": false, 00:20:21.230 "copy": true, 00:20:21.230 "nvme_iov_md": false 00:20:21.230 }, 00:20:21.230 "memory_domains": [ 00:20:21.230 { 
00:20:21.230 "dma_device_id": "system", 00:20:21.230 "dma_device_type": 1 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.230 "dma_device_type": 2 00:20:21.230 } 00:20:21.230 ], 00:20:21.230 "driver_specific": { 00:20:21.230 "passthru": { 00:20:21.230 "name": "pt1", 00:20:21.230 "base_bdev_name": "malloc1" 00:20:21.230 } 00:20:21.231 } 00:20:21.231 }' 00:20:21.231 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.231 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.231 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.231 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.231 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.488 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.488 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.488 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.488 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.488 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.488 18:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.488 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.488 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.488 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.488 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:21.746 
18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.746 "name": "pt2", 00:20:21.746 "aliases": [ 00:20:21.746 "00000000-0000-0000-0000-000000000002" 00:20:21.746 ], 00:20:21.746 "product_name": "passthru", 00:20:21.746 "block_size": 512, 00:20:21.746 "num_blocks": 65536, 00:20:21.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.746 "assigned_rate_limits": { 00:20:21.746 "rw_ios_per_sec": 0, 00:20:21.746 "rw_mbytes_per_sec": 0, 00:20:21.746 "r_mbytes_per_sec": 0, 00:20:21.746 "w_mbytes_per_sec": 0 00:20:21.746 }, 00:20:21.746 "claimed": true, 00:20:21.746 "claim_type": "exclusive_write", 00:20:21.746 "zoned": false, 00:20:21.746 "supported_io_types": { 00:20:21.746 "read": true, 00:20:21.746 "write": true, 00:20:21.746 "unmap": true, 00:20:21.746 "flush": true, 00:20:21.746 "reset": true, 00:20:21.746 "nvme_admin": false, 00:20:21.746 "nvme_io": false, 00:20:21.746 "nvme_io_md": false, 00:20:21.746 "write_zeroes": true, 00:20:21.746 "zcopy": true, 00:20:21.746 "get_zone_info": false, 00:20:21.746 "zone_management": false, 00:20:21.746 "zone_append": false, 00:20:21.746 "compare": false, 00:20:21.746 "compare_and_write": false, 00:20:21.746 "abort": true, 00:20:21.746 "seek_hole": false, 00:20:21.746 "seek_data": false, 00:20:21.746 "copy": true, 00:20:21.746 "nvme_iov_md": false 00:20:21.746 }, 00:20:21.746 "memory_domains": [ 00:20:21.746 { 00:20:21.746 "dma_device_id": "system", 00:20:21.746 "dma_device_type": 1 00:20:21.746 }, 00:20:21.746 { 00:20:21.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.746 "dma_device_type": 2 00:20:21.746 } 00:20:21.746 ], 00:20:21.746 "driver_specific": { 00:20:21.746 "passthru": { 00:20:21.746 "name": "pt2", 00:20:21.746 "base_bdev_name": "malloc2" 00:20:21.746 } 00:20:21.746 } 00:20:21.746 }' 00:20:21.746 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.004 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:22.262 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:22.520 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:22.520 "name": "pt3", 00:20:22.520 "aliases": [ 00:20:22.520 "00000000-0000-0000-0000-000000000003" 00:20:22.520 ], 00:20:22.520 "product_name": "passthru", 00:20:22.520 "block_size": 512, 00:20:22.520 "num_blocks": 65536, 00:20:22.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:22.520 "assigned_rate_limits": { 00:20:22.520 "rw_ios_per_sec": 0, 00:20:22.520 "rw_mbytes_per_sec": 0, 00:20:22.520 "r_mbytes_per_sec": 0, 00:20:22.520 "w_mbytes_per_sec": 0 00:20:22.520 }, 
00:20:22.520 "claimed": true, 00:20:22.520 "claim_type": "exclusive_write", 00:20:22.520 "zoned": false, 00:20:22.520 "supported_io_types": { 00:20:22.520 "read": true, 00:20:22.520 "write": true, 00:20:22.520 "unmap": true, 00:20:22.520 "flush": true, 00:20:22.520 "reset": true, 00:20:22.520 "nvme_admin": false, 00:20:22.520 "nvme_io": false, 00:20:22.520 "nvme_io_md": false, 00:20:22.520 "write_zeroes": true, 00:20:22.520 "zcopy": true, 00:20:22.520 "get_zone_info": false, 00:20:22.520 "zone_management": false, 00:20:22.520 "zone_append": false, 00:20:22.520 "compare": false, 00:20:22.520 "compare_and_write": false, 00:20:22.520 "abort": true, 00:20:22.520 "seek_hole": false, 00:20:22.520 "seek_data": false, 00:20:22.520 "copy": true, 00:20:22.520 "nvme_iov_md": false 00:20:22.520 }, 00:20:22.520 "memory_domains": [ 00:20:22.520 { 00:20:22.520 "dma_device_id": "system", 00:20:22.520 "dma_device_type": 1 00:20:22.520 }, 00:20:22.520 { 00:20:22.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.520 "dma_device_type": 2 00:20:22.520 } 00:20:22.520 ], 00:20:22.520 "driver_specific": { 00:20:22.520 "passthru": { 00:20:22.520 "name": "pt3", 00:20:22.520 "base_bdev_name": "malloc3" 00:20:22.520 } 00:20:22.520 } 00:20:22.520 }' 00:20:22.520 18:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.520 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.520 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:22.520 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:22.778 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:23.037 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:23.037 "name": "pt4", 00:20:23.037 "aliases": [ 00:20:23.037 "00000000-0000-0000-0000-000000000004" 00:20:23.037 ], 00:20:23.037 "product_name": "passthru", 00:20:23.037 "block_size": 512, 00:20:23.037 "num_blocks": 65536, 00:20:23.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:23.037 "assigned_rate_limits": { 00:20:23.037 "rw_ios_per_sec": 0, 00:20:23.037 "rw_mbytes_per_sec": 0, 00:20:23.037 "r_mbytes_per_sec": 0, 00:20:23.037 "w_mbytes_per_sec": 0 00:20:23.037 }, 00:20:23.037 "claimed": true, 00:20:23.037 "claim_type": "exclusive_write", 00:20:23.037 "zoned": false, 00:20:23.037 "supported_io_types": { 00:20:23.037 "read": true, 00:20:23.037 "write": true, 00:20:23.037 "unmap": true, 00:20:23.037 "flush": true, 00:20:23.037 "reset": true, 00:20:23.037 "nvme_admin": false, 00:20:23.037 "nvme_io": false, 00:20:23.037 "nvme_io_md": false, 00:20:23.037 "write_zeroes": true, 00:20:23.037 "zcopy": true, 00:20:23.037 "get_zone_info": false, 00:20:23.037 "zone_management": false, 00:20:23.037 "zone_append": false, 00:20:23.037 
"compare": false, 00:20:23.037 "compare_and_write": false, 00:20:23.037 "abort": true, 00:20:23.037 "seek_hole": false, 00:20:23.037 "seek_data": false, 00:20:23.037 "copy": true, 00:20:23.037 "nvme_iov_md": false 00:20:23.037 }, 00:20:23.037 "memory_domains": [ 00:20:23.037 { 00:20:23.037 "dma_device_id": "system", 00:20:23.037 "dma_device_type": 1 00:20:23.037 }, 00:20:23.037 { 00:20:23.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.037 "dma_device_type": 2 00:20:23.037 } 00:20:23.037 ], 00:20:23.037 "driver_specific": { 00:20:23.037 "passthru": { 00:20:23.037 "name": "pt4", 00:20:23.037 "base_bdev_name": "malloc4" 00:20:23.037 } 00:20:23.037 } 00:20:23.037 }' 00:20:23.037 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.294 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.295 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:23.295 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.295 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.295 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:23.295 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.295 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.553 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:23.553 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.553 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.553 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:23.553 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:23.553 18:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:23.812 [2024-07-15 18:35:09.177502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2cd12479-5a78-40db-a6e1-905558d2f6a5 '!=' 2cd12479-5a78-40db-a6e1-905558d2f6a5 ']' 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2862420 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2862420 ']' 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2862420 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2862420 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2862420' 00:20:23.812 killing process with pid 2862420 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2862420 00:20:23.812 [2024-07-15 
18:35:09.262187] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.812 [2024-07-15 18:35:09.262252] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.812 [2024-07-15 18:35:09.262314] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.812 [2024-07-15 18:35:09.262324] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa5b3c0 name raid_bdev1, state offline 00:20:23.812 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2862420 00:20:23.812 [2024-07-15 18:35:09.297552] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:24.072 18:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:24.072 00:20:24.072 real 0m17.599s 00:20:24.072 user 0m32.716s 00:20:24.072 sys 0m2.357s 00:20:24.072 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.072 18:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.072 ************************************ 00:20:24.072 END TEST raid_superblock_test 00:20:24.072 ************************************ 00:20:24.072 18:35:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:24.072 18:35:09 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:20:24.072 18:35:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:24.072 18:35:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.072 18:35:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:24.072 ************************************ 00:20:24.072 START TEST raid_read_error_test 00:20:24.072 ************************************ 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:20:24.072 18:35:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BNRkUinEy9 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2865338 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2865338 /var/tmp/spdk-raid.sock 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2865338 ']' 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:20:24.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.072 18:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.072 [2024-07-15 18:35:09.604198] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:20:24.072 [2024-07-15 18:35:09.604260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865338 ] 00:20:24.331 [2024-07-15 18:35:09.704429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.331 [2024-07-15 18:35:09.799358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.331 [2024-07-15 18:35:09.864271] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.331 [2024-07-15 18:35:09.864305] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:25.268 18:35:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.268 18:35:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:25.268 18:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:25.268 18:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:25.526 BaseBdev1_malloc 00:20:25.527 18:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:25.786 true 00:20:25.786 
18:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:26.044 [2024-07-15 18:35:11.539954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:26.044 [2024-07-15 18:35:11.539996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.044 [2024-07-15 18:35:11.540014] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1516d20 00:20:26.044 [2024-07-15 18:35:11.540024] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.044 [2024-07-15 18:35:11.541811] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.044 [2024-07-15 18:35:11.541840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:26.044 BaseBdev1 00:20:26.044 18:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:26.044 18:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:26.611 BaseBdev2_malloc 00:20:26.611 18:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:26.611 true 00:20:26.611 18:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:27.179 [2024-07-15 18:35:12.430962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:27.179 [2024-07-15 18:35:12.431004] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.179 [2024-07-15 18:35:12.431020] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151bd50 00:20:27.179 [2024-07-15 18:35:12.431030] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.179 [2024-07-15 18:35:12.432647] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.179 [2024-07-15 18:35:12.432675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:27.179 BaseBdev2 00:20:27.179 18:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:27.179 18:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:27.179 BaseBdev3_malloc 00:20:27.179 18:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:27.747 true 00:20:27.747 18:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:27.747 [2024-07-15 18:35:13.265694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:27.747 [2024-07-15 18:35:13.265736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.747 [2024-07-15 18:35:13.265754] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151aef0 00:20:27.747 [2024-07-15 18:35:13.265763] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.747 [2024-07-15 18:35:13.267382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:20:27.747 [2024-07-15 18:35:13.267415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:27.747 BaseBdev3 00:20:27.747 18:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:27.747 18:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:28.005 BaseBdev4_malloc 00:20:28.005 18:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:28.264 true 00:20:28.264 18:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:28.522 [2024-07-15 18:35:14.040396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:28.522 [2024-07-15 18:35:14.040436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.522 [2024-07-15 18:35:14.040455] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151f280 00:20:28.522 [2024-07-15 18:35:14.040464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.522 [2024-07-15 18:35:14.042050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.522 [2024-07-15 18:35:14.042076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:28.522 BaseBdev4 00:20:28.522 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 
00:20:28.781 [2024-07-15 18:35:14.297144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.781 [2024-07-15 18:35:14.298523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.781 [2024-07-15 18:35:14.298593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.781 [2024-07-15 18:35:14.298653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:28.781 [2024-07-15 18:35:14.298890] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1520d90 00:20:28.781 [2024-07-15 18:35:14.298900] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:28.781 [2024-07-15 18:35:14.299112] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x151e8d0 00:20:28.781 [2024-07-15 18:35:14.299273] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1520d90 00:20:28.781 [2024-07-15 18:35:14.299281] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1520d90 00:20:28.781 [2024-07-15 18:35:14.299385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.781 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.039 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.039 "name": "raid_bdev1", 00:20:29.039 "uuid": "15913523-e66e-428d-a88e-1899236a41ef", 00:20:29.039 "strip_size_kb": 64, 00:20:29.039 "state": "online", 00:20:29.039 "raid_level": "concat", 00:20:29.039 "superblock": true, 00:20:29.039 "num_base_bdevs": 4, 00:20:29.039 "num_base_bdevs_discovered": 4, 00:20:29.039 "num_base_bdevs_operational": 4, 00:20:29.039 "base_bdevs_list": [ 00:20:29.039 { 00:20:29.039 "name": "BaseBdev1", 00:20:29.039 "uuid": "a0332643-8bb6-58fc-934b-e13e8abd1ba0", 00:20:29.039 "is_configured": true, 00:20:29.039 "data_offset": 2048, 00:20:29.039 "data_size": 63488 00:20:29.039 }, 00:20:29.039 { 00:20:29.039 "name": "BaseBdev2", 00:20:29.039 "uuid": "7e5a457b-d374-50e3-8d7d-683ef975fc6b", 00:20:29.039 "is_configured": true, 00:20:29.039 "data_offset": 2048, 00:20:29.039 "data_size": 63488 00:20:29.039 }, 00:20:29.039 { 00:20:29.039 "name": "BaseBdev3", 00:20:29.039 "uuid": "d126103d-d799-58c6-8162-1b6dbddfbbdd", 00:20:29.039 "is_configured": true, 00:20:29.039 "data_offset": 2048, 00:20:29.039 "data_size": 63488 00:20:29.039 }, 00:20:29.039 { 00:20:29.039 "name": "BaseBdev4", 00:20:29.039 "uuid": "b2b75ce7-490b-56fb-af44-71b32363772b", 00:20:29.039 
"is_configured": true, 00:20:29.039 "data_offset": 2048, 00:20:29.039 "data_size": 63488 00:20:29.039 } 00:20:29.039 ] 00:20:29.039 }' 00:20:29.039 18:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.039 18:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.415 18:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:30.415 18:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:30.415 [2024-07-15 18:35:15.665097] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1524210 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.353 18:35:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.612 18:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.612 "name": "raid_bdev1", 00:20:31.612 "uuid": "15913523-e66e-428d-a88e-1899236a41ef", 00:20:31.612 "strip_size_kb": 64, 00:20:31.612 "state": "online", 00:20:31.612 "raid_level": "concat", 00:20:31.612 "superblock": true, 00:20:31.612 "num_base_bdevs": 4, 00:20:31.612 "num_base_bdevs_discovered": 4, 00:20:31.612 "num_base_bdevs_operational": 4, 00:20:31.612 "base_bdevs_list": [ 00:20:31.612 { 00:20:31.612 "name": "BaseBdev1", 00:20:31.612 "uuid": "a0332643-8bb6-58fc-934b-e13e8abd1ba0", 00:20:31.612 "is_configured": true, 00:20:31.612 "data_offset": 2048, 00:20:31.612 "data_size": 63488 00:20:31.612 }, 00:20:31.612 { 00:20:31.612 "name": "BaseBdev2", 00:20:31.612 "uuid": "7e5a457b-d374-50e3-8d7d-683ef975fc6b", 00:20:31.612 "is_configured": true, 00:20:31.612 "data_offset": 2048, 00:20:31.612 "data_size": 63488 00:20:31.612 }, 00:20:31.612 { 00:20:31.612 "name": "BaseBdev3", 00:20:31.612 "uuid": "d126103d-d799-58c6-8162-1b6dbddfbbdd", 00:20:31.612 "is_configured": true, 00:20:31.612 "data_offset": 2048, 00:20:31.612 "data_size": 63488 00:20:31.612 }, 00:20:31.612 { 00:20:31.612 "name": "BaseBdev4", 00:20:31.612 "uuid": 
"b2b75ce7-490b-56fb-af44-71b32363772b", 00:20:31.612 "is_configured": true, 00:20:31.612 "data_offset": 2048, 00:20:31.612 "data_size": 63488 00:20:31.612 } 00:20:31.612 ] 00:20:31.612 }' 00:20:31.612 18:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.612 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.178 18:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:32.437 [2024-07-15 18:35:17.876206] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:32.437 [2024-07-15 18:35:17.876232] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.437 [2024-07-15 18:35:17.879695] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.437 [2024-07-15 18:35:17.879733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.437 [2024-07-15 18:35:17.879772] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.437 [2024-07-15 18:35:17.879780] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1520d90 name raid_bdev1, state offline 00:20:32.437 0 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2865338 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2865338 ']' 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2865338 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 2865338 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2865338' 00:20:32.437 killing process with pid 2865338 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2865338 00:20:32.437 [2024-07-15 18:35:17.954333] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.437 18:35:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2865338 00:20:32.437 [2024-07-15 18:35:17.983616] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.BNRkUinEy9 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:20:32.695 00:20:32.695 real 0m8.659s 00:20:32.695 user 0m14.573s 00:20:32.695 sys 0m1.125s 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.695 18:35:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.695 
************************************ 00:20:32.695 END TEST raid_read_error_test 00:20:32.695 ************************************ 00:20:32.695 18:35:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:32.695 18:35:18 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:20:32.695 18:35:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:32.695 18:35:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.695 18:35:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.695 ************************************ 00:20:32.695 START TEST raid_write_error_test 00:20:32.695 ************************************ 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:32.695 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.V4hci6qeD4 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # 
raid_pid=2866893 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2866893 /var/tmp/spdk-raid.sock 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2866893 ']' 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:32.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.954 18:35:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.954 [2024-07-15 18:35:18.313711] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:20:32.954 [2024-07-15 18:35:18.313776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866893 ] 00:20:32.954 [2024-07-15 18:35:18.415138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.954 [2024-07-15 18:35:18.505539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.212 [2024-07-15 18:35:18.565318] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.212 [2024-07-15 18:35:18.565352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.778 18:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.778 18:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:33.778 18:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:33.778 18:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:34.036 BaseBdev1_malloc 00:20:34.036 18:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:34.294 true 00:20:34.294 18:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:34.553 [2024-07-15 18:35:20.028137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:34.553 [2024-07-15 18:35:20.028182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:34.553 [2024-07-15 18:35:20.028199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f7d20 00:20:34.553 [2024-07-15 18:35:20.028208] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.553 [2024-07-15 18:35:20.029855] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.553 [2024-07-15 18:35:20.029883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:34.553 BaseBdev1 00:20:34.553 18:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:34.553 18:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:34.811 BaseBdev2_malloc 00:20:34.811 18:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:35.070 true 00:20:35.070 18:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:35.328 [2024-07-15 18:35:20.798758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:35.328 [2024-07-15 18:35:20.798799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.328 [2024-07-15 18:35:20.798815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26fcd50 00:20:35.328 [2024-07-15 18:35:20.798824] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.328 [2024-07-15 18:35:20.800291] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.328 [2024-07-15 18:35:20.800316] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:35.328 BaseBdev2 00:20:35.328 18:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:35.328 18:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:35.587 BaseBdev3_malloc 00:20:35.587 18:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:35.845 true 00:20:35.845 18:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:36.133 [2024-07-15 18:35:21.593246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:36.133 [2024-07-15 18:35:21.593287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.133 [2024-07-15 18:35:21.593303] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26fbef0 00:20:36.133 [2024-07-15 18:35:21.593312] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.133 [2024-07-15 18:35:21.594788] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.133 [2024-07-15 18:35:21.594815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:36.133 BaseBdev3 00:20:36.133 18:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:36.133 18:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:36.392 BaseBdev4_malloc 00:20:36.392 18:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:36.651 true 00:20:36.651 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:36.909 [2024-07-15 18:35:22.383682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:36.909 [2024-07-15 18:35:22.383723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.909 [2024-07-15 18:35:22.383741] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2700280 00:20:36.909 [2024-07-15 18:35:22.383751] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.909 [2024-07-15 18:35:22.385238] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.909 [2024-07-15 18:35:22.385264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:36.909 BaseBdev4 00:20:36.909 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:37.168 [2024-07-15 18:35:22.644408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.168 [2024-07-15 18:35:22.645650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.168 [2024-07-15 18:35:22.645717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.168 [2024-07-15 18:35:22.645777] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:37.168 [2024-07-15 18:35:22.646023] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2701d90 00:20:37.168 [2024-07-15 18:35:22.646034] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:37.168 [2024-07-15 18:35:22.646216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26ff8d0 00:20:37.168 [2024-07-15 18:35:22.646366] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2701d90 00:20:37.168 [2024-07-15 18:35:22.646375] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2701d90 00:20:37.168 [2024-07-15 18:35:22.646479] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.168 18:35:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.168 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.427 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.427 "name": "raid_bdev1", 00:20:37.427 "uuid": "4324e3b0-4108-4489-bb06-8d98228df862", 00:20:37.427 "strip_size_kb": 64, 00:20:37.427 "state": "online", 00:20:37.427 "raid_level": "concat", 00:20:37.427 "superblock": true, 00:20:37.427 "num_base_bdevs": 4, 00:20:37.427 "num_base_bdevs_discovered": 4, 00:20:37.427 "num_base_bdevs_operational": 4, 00:20:37.427 "base_bdevs_list": [ 00:20:37.427 { 00:20:37.427 "name": "BaseBdev1", 00:20:37.427 "uuid": "703317cd-4f98-5a16-b949-a4258bbb73be", 00:20:37.427 "is_configured": true, 00:20:37.427 "data_offset": 2048, 00:20:37.427 "data_size": 63488 00:20:37.427 }, 00:20:37.427 { 00:20:37.427 "name": "BaseBdev2", 00:20:37.427 "uuid": "67850a4e-eea1-54e2-9331-4c0c8b72eb1a", 00:20:37.427 "is_configured": true, 00:20:37.427 "data_offset": 2048, 00:20:37.427 "data_size": 63488 00:20:37.427 }, 00:20:37.427 { 00:20:37.427 "name": "BaseBdev3", 00:20:37.427 "uuid": "1f1b4f06-a678-50bf-b4d3-d1a9dea6c282", 00:20:37.427 "is_configured": true, 00:20:37.427 "data_offset": 2048, 00:20:37.427 "data_size": 63488 00:20:37.427 }, 00:20:37.427 { 00:20:37.427 "name": "BaseBdev4", 00:20:37.427 "uuid": "9c43241e-627b-5536-87ad-3ac8a7a20f4f", 00:20:37.427 "is_configured": true, 00:20:37.427 "data_offset": 2048, 00:20:37.427 "data_size": 63488 00:20:37.427 } 00:20:37.427 ] 00:20:37.427 }' 00:20:37.427 18:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.427 18:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.361 18:35:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:20:38.361 18:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:38.361 [2024-07-15 18:35:23.675472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2705210 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.297 18:35:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.297 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.556 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.556 "name": "raid_bdev1", 00:20:39.556 "uuid": "4324e3b0-4108-4489-bb06-8d98228df862", 00:20:39.556 "strip_size_kb": 64, 00:20:39.556 "state": "online", 00:20:39.556 "raid_level": "concat", 00:20:39.556 "superblock": true, 00:20:39.556 "num_base_bdevs": 4, 00:20:39.556 "num_base_bdevs_discovered": 4, 00:20:39.556 "num_base_bdevs_operational": 4, 00:20:39.556 "base_bdevs_list": [ 00:20:39.556 { 00:20:39.556 "name": "BaseBdev1", 00:20:39.556 "uuid": "703317cd-4f98-5a16-b949-a4258bbb73be", 00:20:39.556 "is_configured": true, 00:20:39.556 "data_offset": 2048, 00:20:39.556 "data_size": 63488 00:20:39.556 }, 00:20:39.556 { 00:20:39.556 "name": "BaseBdev2", 00:20:39.556 "uuid": "67850a4e-eea1-54e2-9331-4c0c8b72eb1a", 00:20:39.556 "is_configured": true, 00:20:39.556 "data_offset": 2048, 00:20:39.556 "data_size": 63488 00:20:39.556 }, 00:20:39.556 { 00:20:39.556 "name": "BaseBdev3", 00:20:39.556 "uuid": "1f1b4f06-a678-50bf-b4d3-d1a9dea6c282", 00:20:39.556 "is_configured": true, 00:20:39.556 "data_offset": 2048, 00:20:39.556 "data_size": 63488 00:20:39.556 }, 00:20:39.556 { 00:20:39.556 "name": "BaseBdev4", 00:20:39.556 "uuid": "9c43241e-627b-5536-87ad-3ac8a7a20f4f", 00:20:39.556 "is_configured": true, 00:20:39.556 "data_offset": 2048, 00:20:39.556 "data_size": 63488 00:20:39.556 } 00:20:39.556 ] 00:20:39.556 }' 00:20:39.556 18:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.556 18:35:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.123 18:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:40.382 [2024-07-15 18:35:25.785307] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:40.382 [2024-07-15 18:35:25.785350] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:40.382 [2024-07-15 18:35:25.788762] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.382 [2024-07-15 18:35:25.788800] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.382 [2024-07-15 18:35:25.788839] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:40.382 [2024-07-15 18:35:25.788847] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2701d90 name raid_bdev1, state offline 00:20:40.382 0 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2866893 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2866893 ']' 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2866893 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2866893 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2866893' 00:20:40.382 killing process with pid 2866893 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2866893 00:20:40.382 [2024-07-15 18:35:25.858172] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:40.382 18:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2866893 00:20:40.382 [2024-07-15 18:35:25.888140] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.V4hci6qeD4 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:20:40.641 00:20:40.641 real 0m7.865s 00:20:40.641 user 0m12.982s 00:20:40.641 sys 0m1.094s 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:40.641 18:35:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.641 ************************************ 00:20:40.641 END TEST raid_write_error_test 00:20:40.641 ************************************ 00:20:40.641 18:35:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:40.641 18:35:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:40.641 
18:35:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:40.641 18:35:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:40.641 18:35:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.641 18:35:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.641 ************************************ 00:20:40.641 START TEST raid_state_function_test 00:20:40.641 ************************************ 00:20:40.641 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:20:40.641 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:40.641 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2868069 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2868069' 00:20:40.642 Process raid pid: 2868069 00:20:40.642 18:35:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2868069 /var/tmp/spdk-raid.sock 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2868069 ']' 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:40.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.642 18:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.901 [2024-07-15 18:35:26.213204] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:20:40.901 [2024-07-15 18:35:26.213264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.901 [2024-07-15 18:35:26.314314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.901 [2024-07-15 18:35:26.410779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.159 [2024-07-15 18:35:26.470070] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.159 [2024-07-15 18:35:26.470098] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.727 18:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.728 18:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:41.728 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:41.986 [2024-07-15 18:35:27.521299] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.986 [2024-07-15 18:35:27.521336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.986 [2024-07-15 18:35:27.521345] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.986 [2024-07-15 18:35:27.521354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.986 [2024-07-15 18:35:27.521361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:41.986 [2024-07-15 18:35:27.521369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:41.986 
[2024-07-15 18:35:27.521375] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:41.986 [2024-07-15 18:35:27.521383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.245 18:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.504 18:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.504 "name": "Existed_Raid", 00:20:42.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.504 "strip_size_kb": 0, 00:20:42.504 "state": 
"configuring", 00:20:42.504 "raid_level": "raid1", 00:20:42.504 "superblock": false, 00:20:42.504 "num_base_bdevs": 4, 00:20:42.504 "num_base_bdevs_discovered": 0, 00:20:42.504 "num_base_bdevs_operational": 4, 00:20:42.504 "base_bdevs_list": [ 00:20:42.504 { 00:20:42.504 "name": "BaseBdev1", 00:20:42.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.504 "is_configured": false, 00:20:42.504 "data_offset": 0, 00:20:42.504 "data_size": 0 00:20:42.504 }, 00:20:42.504 { 00:20:42.504 "name": "BaseBdev2", 00:20:42.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.504 "is_configured": false, 00:20:42.504 "data_offset": 0, 00:20:42.504 "data_size": 0 00:20:42.504 }, 00:20:42.504 { 00:20:42.504 "name": "BaseBdev3", 00:20:42.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.504 "is_configured": false, 00:20:42.504 "data_offset": 0, 00:20:42.504 "data_size": 0 00:20:42.504 }, 00:20:42.504 { 00:20:42.504 "name": "BaseBdev4", 00:20:42.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.504 "is_configured": false, 00:20:42.504 "data_offset": 0, 00:20:42.504 "data_size": 0 00:20:42.504 } 00:20:42.504 ] 00:20:42.504 }' 00:20:42.504 18:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.504 18:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.442 18:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:43.700 [2024-07-15 18:35:29.073293] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.701 [2024-07-15 18:35:29.073322] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeb8bc0 name Existed_Raid, state configuring 00:20:43.701 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:43.960 [2024-07-15 18:35:29.334011] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.960 [2024-07-15 18:35:29.334043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.960 [2024-07-15 18:35:29.334051] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.960 [2024-07-15 18:35:29.334059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.960 [2024-07-15 18:35:29.334066] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:43.960 [2024-07-15 18:35:29.334073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:43.960 [2024-07-15 18:35:29.334080] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:43.960 [2024-07-15 18:35:29.334088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:43.960 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:44.218 [2024-07-15 18:35:29.520019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.218 BaseBdev1 00:20:44.218 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:44.218 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:44.218 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:44.218 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:44.218 18:35:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:44.218 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:44.218 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.477 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:44.477 [ 00:20:44.477 { 00:20:44.477 "name": "BaseBdev1", 00:20:44.477 "aliases": [ 00:20:44.477 "215801f3-5a29-451a-b41d-5d2bbcc8ee77" 00:20:44.477 ], 00:20:44.477 "product_name": "Malloc disk", 00:20:44.477 "block_size": 512, 00:20:44.477 "num_blocks": 65536, 00:20:44.477 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:44.477 "assigned_rate_limits": { 00:20:44.477 "rw_ios_per_sec": 0, 00:20:44.477 "rw_mbytes_per_sec": 0, 00:20:44.477 "r_mbytes_per_sec": 0, 00:20:44.477 "w_mbytes_per_sec": 0 00:20:44.477 }, 00:20:44.477 "claimed": true, 00:20:44.477 "claim_type": "exclusive_write", 00:20:44.477 "zoned": false, 00:20:44.477 "supported_io_types": { 00:20:44.477 "read": true, 00:20:44.477 "write": true, 00:20:44.477 "unmap": true, 00:20:44.477 "flush": true, 00:20:44.477 "reset": true, 00:20:44.477 "nvme_admin": false, 00:20:44.477 "nvme_io": false, 00:20:44.477 "nvme_io_md": false, 00:20:44.477 "write_zeroes": true, 00:20:44.477 "zcopy": true, 00:20:44.477 "get_zone_info": false, 00:20:44.477 "zone_management": false, 00:20:44.477 "zone_append": false, 00:20:44.477 "compare": false, 00:20:44.477 "compare_and_write": false, 00:20:44.477 "abort": true, 00:20:44.477 "seek_hole": false, 00:20:44.477 "seek_data": false, 00:20:44.477 "copy": true, 00:20:44.477 "nvme_iov_md": false 00:20:44.477 }, 00:20:44.477 "memory_domains": [ 00:20:44.477 { 
00:20:44.477 "dma_device_id": "system", 00:20:44.477 "dma_device_type": 1 00:20:44.477 }, 00:20:44.477 { 00:20:44.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.477 "dma_device_type": 2 00:20:44.477 } 00:20:44.477 ], 00:20:44.477 "driver_specific": {} 00:20:44.477 } 00:20:44.477 ] 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.478 18:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.736 18:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:20:44.736 "name": "Existed_Raid", 00:20:44.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.736 "strip_size_kb": 0, 00:20:44.736 "state": "configuring", 00:20:44.736 "raid_level": "raid1", 00:20:44.736 "superblock": false, 00:20:44.736 "num_base_bdevs": 4, 00:20:44.736 "num_base_bdevs_discovered": 1, 00:20:44.736 "num_base_bdevs_operational": 4, 00:20:44.736 "base_bdevs_list": [ 00:20:44.736 { 00:20:44.736 "name": "BaseBdev1", 00:20:44.736 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:44.736 "is_configured": true, 00:20:44.736 "data_offset": 0, 00:20:44.736 "data_size": 65536 00:20:44.736 }, 00:20:44.736 { 00:20:44.736 "name": "BaseBdev2", 00:20:44.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.736 "is_configured": false, 00:20:44.736 "data_offset": 0, 00:20:44.736 "data_size": 0 00:20:44.736 }, 00:20:44.736 { 00:20:44.736 "name": "BaseBdev3", 00:20:44.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.736 "is_configured": false, 00:20:44.736 "data_offset": 0, 00:20:44.736 "data_size": 0 00:20:44.736 }, 00:20:44.736 { 00:20:44.736 "name": "BaseBdev4", 00:20:44.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.736 "is_configured": false, 00:20:44.736 "data_offset": 0, 00:20:44.736 "data_size": 0 00:20:44.736 } 00:20:44.736 ] 00:20:44.736 }' 00:20:44.736 18:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.737 18:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.302 18:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:45.560 [2024-07-15 18:35:30.959862] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:45.560 [2024-07-15 18:35:30.959898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeb8430 name Existed_Raid, state configuring 
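The `verify_raid_bdev_state` helper exercised above (`bdev/bdev_raid.sh@116`) filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the record against the expected state, raid level, strip size, and operational bdev count. Below is a minimal Python sketch of that comparison, purely illustrative (the real helper is the shell function in the test suite); the sample record is copied from the log output above:

```python
import json

# Sample output of `bdev_raid_get_bdevs all`, abridged from the log above
# (base_bdevs_list omitted; only the fields the check compares are kept).
RAID_BDEVS_JSON = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
'''

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror of the shell helper: select the named raid bdev (the jq
    `select(.name == ...)` step) and compare the expected fields."""
    info = next((b for b in bdevs if b["name"] == name), None)
    if info is None:
        return False
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

bdevs = json.loads(RAID_BDEVS_JSON)
print(verify_raid_bdev_state(bdevs, "Existed_Raid", "configuring", "raid1", 0, 4))  # True
```

The test repeats this check after every `bdev_malloc_create`, watching `num_base_bdevs_discovered` climb from 1 to 4 while `state` stays `configuring`, then flips the expected state to `online` once all four base bdevs are claimed.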
00:20:45.560 18:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:45.819 [2024-07-15 18:35:31.120330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.819 [2024-07-15 18:35:31.121832] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:45.819 [2024-07-15 18:35:31.121862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:45.819 [2024-07-15 18:35:31.121870] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:45.819 [2024-07-15 18:35:31.121878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:45.819 [2024-07-15 18:35:31.121886] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:45.819 [2024-07-15 18:35:31.121893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
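The `waitforbdev` calls traced above (`common/autotest_common.sh@897`-`@905`) default the timeout to 2000 ms when the caller passes none (`[[ -z '' ]]`), run `bdev_wait_for_examine`, then issue `bdev_get_bdevs -b <name> -t <timeout>` so the RPC itself blocks until the bdev appears. A hypothetical Python rendering of that logic, with `run_rpc` standing in for `scripts/rpc.py -s /var/tmp/spdk-raid.sock` so the flow is checkable without a live SPDK target:

```python
def waitforbdev(bdev_name, run_rpc, bdev_timeout=""):
    """Sketch of the shell helper's control flow (illustrative only).

    run_rpc is a stand-in for invoking rpc.py against the test socket.
    """
    # [[ -z '' ]] branch in the log: empty timeout defaults to 2000 ms
    if not bdev_timeout:
        bdev_timeout = "2000"
    # let any in-flight examine callbacks settle first
    run_rpc(["bdev_wait_for_examine"])
    # -t makes the get_bdevs RPC wait up to bdev_timeout for the bdev
    run_rpc(["bdev_get_bdevs", "-b", bdev_name, "-t", bdev_timeout])
    return 0

# Usage with a recording stub instead of a real RPC socket:
calls = []
waitforbdev("BaseBdev1", lambda args: calls.append(args))
```

With the stub, `calls` ends up as `[["bdev_wait_for_examine"], ["bdev_get_bdevs", "-b", "BaseBdev1", "-t", "2000"]]`, matching the two RPC invocations visible in the trace for each base bdev.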
00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.819 "name": "Existed_Raid", 00:20:45.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.819 "strip_size_kb": 0, 00:20:45.819 "state": "configuring", 00:20:45.819 "raid_level": "raid1", 00:20:45.819 "superblock": false, 00:20:45.819 "num_base_bdevs": 4, 00:20:45.819 "num_base_bdevs_discovered": 1, 00:20:45.819 "num_base_bdevs_operational": 4, 00:20:45.819 "base_bdevs_list": [ 00:20:45.819 { 00:20:45.819 "name": "BaseBdev1", 00:20:45.819 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:45.819 "is_configured": true, 00:20:45.819 "data_offset": 0, 00:20:45.819 "data_size": 65536 00:20:45.819 }, 00:20:45.819 { 00:20:45.819 "name": "BaseBdev2", 00:20:45.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.819 "is_configured": false, 00:20:45.819 "data_offset": 0, 00:20:45.819 "data_size": 0 00:20:45.819 }, 00:20:45.819 { 00:20:45.819 "name": "BaseBdev3", 00:20:45.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.819 "is_configured": false, 00:20:45.819 
"data_offset": 0, 00:20:45.819 "data_size": 0 00:20:45.819 }, 00:20:45.819 { 00:20:45.819 "name": "BaseBdev4", 00:20:45.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.819 "is_configured": false, 00:20:45.819 "data_offset": 0, 00:20:45.819 "data_size": 0 00:20:45.819 } 00:20:45.819 ] 00:20:45.819 }' 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.819 18:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.754 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:47.013 [2024-07-15 18:35:32.342822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.013 BaseBdev2 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.013 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.272 [ 
00:20:47.272 { 00:20:47.272 "name": "BaseBdev2", 00:20:47.272 "aliases": [ 00:20:47.272 "2e975932-1b63-4936-b001-c768f3c7de09" 00:20:47.272 ], 00:20:47.272 "product_name": "Malloc disk", 00:20:47.272 "block_size": 512, 00:20:47.272 "num_blocks": 65536, 00:20:47.272 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:47.272 "assigned_rate_limits": { 00:20:47.272 "rw_ios_per_sec": 0, 00:20:47.272 "rw_mbytes_per_sec": 0, 00:20:47.272 "r_mbytes_per_sec": 0, 00:20:47.272 "w_mbytes_per_sec": 0 00:20:47.272 }, 00:20:47.272 "claimed": true, 00:20:47.272 "claim_type": "exclusive_write", 00:20:47.272 "zoned": false, 00:20:47.272 "supported_io_types": { 00:20:47.272 "read": true, 00:20:47.272 "write": true, 00:20:47.272 "unmap": true, 00:20:47.272 "flush": true, 00:20:47.272 "reset": true, 00:20:47.272 "nvme_admin": false, 00:20:47.272 "nvme_io": false, 00:20:47.272 "nvme_io_md": false, 00:20:47.272 "write_zeroes": true, 00:20:47.272 "zcopy": true, 00:20:47.272 "get_zone_info": false, 00:20:47.272 "zone_management": false, 00:20:47.272 "zone_append": false, 00:20:47.272 "compare": false, 00:20:47.272 "compare_and_write": false, 00:20:47.272 "abort": true, 00:20:47.272 "seek_hole": false, 00:20:47.272 "seek_data": false, 00:20:47.272 "copy": true, 00:20:47.272 "nvme_iov_md": false 00:20:47.272 }, 00:20:47.272 "memory_domains": [ 00:20:47.272 { 00:20:47.272 "dma_device_id": "system", 00:20:47.272 "dma_device_type": 1 00:20:47.272 }, 00:20:47.272 { 00:20:47.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.272 "dma_device_type": 2 00:20:47.272 } 00:20:47.272 ], 00:20:47.272 "driver_specific": {} 00:20:47.272 } 00:20:47.272 ] 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:47.272 18:35:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.272 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.530 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.530 "name": "Existed_Raid", 00:20:47.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.530 "strip_size_kb": 0, 00:20:47.530 "state": "configuring", 00:20:47.530 "raid_level": "raid1", 00:20:47.530 "superblock": false, 00:20:47.530 "num_base_bdevs": 4, 00:20:47.530 "num_base_bdevs_discovered": 2, 00:20:47.530 "num_base_bdevs_operational": 4, 00:20:47.530 "base_bdevs_list": [ 00:20:47.530 { 00:20:47.530 
"name": "BaseBdev1", 00:20:47.530 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:47.530 "is_configured": true, 00:20:47.530 "data_offset": 0, 00:20:47.530 "data_size": 65536 00:20:47.530 }, 00:20:47.530 { 00:20:47.530 "name": "BaseBdev2", 00:20:47.530 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:47.530 "is_configured": true, 00:20:47.530 "data_offset": 0, 00:20:47.530 "data_size": 65536 00:20:47.530 }, 00:20:47.530 { 00:20:47.530 "name": "BaseBdev3", 00:20:47.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.530 "is_configured": false, 00:20:47.530 "data_offset": 0, 00:20:47.530 "data_size": 0 00:20:47.530 }, 00:20:47.530 { 00:20:47.530 "name": "BaseBdev4", 00:20:47.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.530 "is_configured": false, 00:20:47.530 "data_offset": 0, 00:20:47.530 "data_size": 0 00:20:47.530 } 00:20:47.530 ] 00:20:47.530 }' 00:20:47.530 18:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.530 18:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.097 18:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:48.355 [2024-07-15 18:35:33.665572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.355 BaseBdev3 00:20:48.355 18:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:48.355 18:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:48.355 18:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:48.355 18:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:48.355 18:35:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:48.355 18:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:48.356 18:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.624 18:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:48.888 [ 00:20:48.888 { 00:20:48.888 "name": "BaseBdev3", 00:20:48.888 "aliases": [ 00:20:48.888 "b95b91a0-5458-49cd-97a1-5a325e85ec8c" 00:20:48.888 ], 00:20:48.888 "product_name": "Malloc disk", 00:20:48.888 "block_size": 512, 00:20:48.888 "num_blocks": 65536, 00:20:48.888 "uuid": "b95b91a0-5458-49cd-97a1-5a325e85ec8c", 00:20:48.888 "assigned_rate_limits": { 00:20:48.888 "rw_ios_per_sec": 0, 00:20:48.888 "rw_mbytes_per_sec": 0, 00:20:48.888 "r_mbytes_per_sec": 0, 00:20:48.888 "w_mbytes_per_sec": 0 00:20:48.888 }, 00:20:48.888 "claimed": true, 00:20:48.888 "claim_type": "exclusive_write", 00:20:48.888 "zoned": false, 00:20:48.888 "supported_io_types": { 00:20:48.889 "read": true, 00:20:48.889 "write": true, 00:20:48.889 "unmap": true, 00:20:48.889 "flush": true, 00:20:48.889 "reset": true, 00:20:48.889 "nvme_admin": false, 00:20:48.889 "nvme_io": false, 00:20:48.889 "nvme_io_md": false, 00:20:48.889 "write_zeroes": true, 00:20:48.889 "zcopy": true, 00:20:48.889 "get_zone_info": false, 00:20:48.889 "zone_management": false, 00:20:48.889 "zone_append": false, 00:20:48.889 "compare": false, 00:20:48.889 "compare_and_write": false, 00:20:48.889 "abort": true, 00:20:48.889 "seek_hole": false, 00:20:48.889 "seek_data": false, 00:20:48.889 "copy": true, 00:20:48.889 "nvme_iov_md": false 00:20:48.889 }, 00:20:48.889 "memory_domains": [ 00:20:48.889 { 00:20:48.889 "dma_device_id": "system", 
00:20:48.889 "dma_device_type": 1 00:20:48.889 }, 00:20:48.889 { 00:20:48.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.889 "dma_device_type": 2 00:20:48.889 } 00:20:48.889 ], 00:20:48.889 "driver_specific": {} 00:20:48.889 } 00:20:48.889 ] 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.889 "name": "Existed_Raid", 00:20:48.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.889 "strip_size_kb": 0, 00:20:48.889 "state": "configuring", 00:20:48.889 "raid_level": "raid1", 00:20:48.889 "superblock": false, 00:20:48.889 "num_base_bdevs": 4, 00:20:48.889 "num_base_bdevs_discovered": 3, 00:20:48.889 "num_base_bdevs_operational": 4, 00:20:48.889 "base_bdevs_list": [ 00:20:48.889 { 00:20:48.889 "name": "BaseBdev1", 00:20:48.889 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:48.889 "is_configured": true, 00:20:48.889 "data_offset": 0, 00:20:48.889 "data_size": 65536 00:20:48.889 }, 00:20:48.889 { 00:20:48.889 "name": "BaseBdev2", 00:20:48.889 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:48.889 "is_configured": true, 00:20:48.889 "data_offset": 0, 00:20:48.889 "data_size": 65536 00:20:48.889 }, 00:20:48.889 { 00:20:48.889 "name": "BaseBdev3", 00:20:48.889 "uuid": "b95b91a0-5458-49cd-97a1-5a325e85ec8c", 00:20:48.889 "is_configured": true, 00:20:48.889 "data_offset": 0, 00:20:48.889 "data_size": 65536 00:20:48.889 }, 00:20:48.889 { 00:20:48.889 "name": "BaseBdev4", 00:20:48.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.889 "is_configured": false, 00:20:48.889 "data_offset": 0, 00:20:48.889 "data_size": 0 00:20:48.889 } 00:20:48.889 ] 00:20:48.889 }' 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.889 18:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:49.823 [2024-07-15 18:35:35.265114] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:49.823 [2024-07-15 18:35:35.265153] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xeb9490 00:20:49.823 [2024-07-15 18:35:35.265160] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:49.823 [2024-07-15 18:35:35.265356] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xea52d0 00:20:49.823 [2024-07-15 18:35:35.265489] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xeb9490 00:20:49.823 [2024-07-15 18:35:35.265499] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xeb9490 00:20:49.823 [2024-07-15 18:35:35.265657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.823 BaseBdev4 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:49.823 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:50.082 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:50.339 [ 00:20:50.339 { 00:20:50.339 
"name": "BaseBdev4", 00:20:50.339 "aliases": [ 00:20:50.339 "dd004b02-fa7c-4a50-9d2a-68734429f100" 00:20:50.339 ], 00:20:50.339 "product_name": "Malloc disk", 00:20:50.339 "block_size": 512, 00:20:50.340 "num_blocks": 65536, 00:20:50.340 "uuid": "dd004b02-fa7c-4a50-9d2a-68734429f100", 00:20:50.340 "assigned_rate_limits": { 00:20:50.340 "rw_ios_per_sec": 0, 00:20:50.340 "rw_mbytes_per_sec": 0, 00:20:50.340 "r_mbytes_per_sec": 0, 00:20:50.340 "w_mbytes_per_sec": 0 00:20:50.340 }, 00:20:50.340 "claimed": true, 00:20:50.340 "claim_type": "exclusive_write", 00:20:50.340 "zoned": false, 00:20:50.340 "supported_io_types": { 00:20:50.340 "read": true, 00:20:50.340 "write": true, 00:20:50.340 "unmap": true, 00:20:50.340 "flush": true, 00:20:50.340 "reset": true, 00:20:50.340 "nvme_admin": false, 00:20:50.340 "nvme_io": false, 00:20:50.340 "nvme_io_md": false, 00:20:50.340 "write_zeroes": true, 00:20:50.340 "zcopy": true, 00:20:50.340 "get_zone_info": false, 00:20:50.340 "zone_management": false, 00:20:50.340 "zone_append": false, 00:20:50.340 "compare": false, 00:20:50.340 "compare_and_write": false, 00:20:50.340 "abort": true, 00:20:50.340 "seek_hole": false, 00:20:50.340 "seek_data": false, 00:20:50.340 "copy": true, 00:20:50.340 "nvme_iov_md": false 00:20:50.340 }, 00:20:50.340 "memory_domains": [ 00:20:50.340 { 00:20:50.340 "dma_device_id": "system", 00:20:50.340 "dma_device_type": 1 00:20:50.340 }, 00:20:50.340 { 00:20:50.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.340 "dma_device_type": 2 00:20:50.340 } 00:20:50.340 ], 00:20:50.340 "driver_specific": {} 00:20:50.340 } 00:20:50.340 ] 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.340 18:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.598 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.598 "name": "Existed_Raid", 00:20:50.598 "uuid": "9728c7ee-e0e6-4760-8b8b-7815238833ef", 00:20:50.598 "strip_size_kb": 0, 00:20:50.598 "state": "online", 00:20:50.599 "raid_level": "raid1", 00:20:50.599 "superblock": false, 00:20:50.599 "num_base_bdevs": 4, 00:20:50.599 "num_base_bdevs_discovered": 4, 00:20:50.599 "num_base_bdevs_operational": 4, 00:20:50.599 "base_bdevs_list": [ 00:20:50.599 { 00:20:50.599 "name": "BaseBdev1", 00:20:50.599 "uuid": 
"215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:50.599 "is_configured": true, 00:20:50.599 "data_offset": 0, 00:20:50.599 "data_size": 65536 00:20:50.599 }, 00:20:50.599 { 00:20:50.599 "name": "BaseBdev2", 00:20:50.599 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:50.599 "is_configured": true, 00:20:50.599 "data_offset": 0, 00:20:50.599 "data_size": 65536 00:20:50.599 }, 00:20:50.599 { 00:20:50.599 "name": "BaseBdev3", 00:20:50.599 "uuid": "b95b91a0-5458-49cd-97a1-5a325e85ec8c", 00:20:50.599 "is_configured": true, 00:20:50.599 "data_offset": 0, 00:20:50.599 "data_size": 65536 00:20:50.599 }, 00:20:50.599 { 00:20:50.599 "name": "BaseBdev4", 00:20:50.599 "uuid": "dd004b02-fa7c-4a50-9d2a-68734429f100", 00:20:50.599 "is_configured": true, 00:20:50.599 "data_offset": 0, 00:20:50.599 "data_size": 65536 00:20:50.599 } 00:20:50.599 ] 00:20:50.599 }' 00:20:50.599 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.599 18:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:51.166 18:35:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:51.424 [2024-07-15 18:35:36.938011] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.424 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:51.424 "name": "Existed_Raid", 00:20:51.424 "aliases": [ 00:20:51.424 "9728c7ee-e0e6-4760-8b8b-7815238833ef" 00:20:51.424 ], 00:20:51.424 "product_name": "Raid Volume", 00:20:51.424 "block_size": 512, 00:20:51.424 "num_blocks": 65536, 00:20:51.424 "uuid": "9728c7ee-e0e6-4760-8b8b-7815238833ef", 00:20:51.424 "assigned_rate_limits": { 00:20:51.424 "rw_ios_per_sec": 0, 00:20:51.424 "rw_mbytes_per_sec": 0, 00:20:51.424 "r_mbytes_per_sec": 0, 00:20:51.424 "w_mbytes_per_sec": 0 00:20:51.424 }, 00:20:51.424 "claimed": false, 00:20:51.424 "zoned": false, 00:20:51.424 "supported_io_types": { 00:20:51.424 "read": true, 00:20:51.424 "write": true, 00:20:51.424 "unmap": false, 00:20:51.424 "flush": false, 00:20:51.424 "reset": true, 00:20:51.424 "nvme_admin": false, 00:20:51.424 "nvme_io": false, 00:20:51.424 "nvme_io_md": false, 00:20:51.424 "write_zeroes": true, 00:20:51.424 "zcopy": false, 00:20:51.424 "get_zone_info": false, 00:20:51.424 "zone_management": false, 00:20:51.424 "zone_append": false, 00:20:51.424 "compare": false, 00:20:51.424 "compare_and_write": false, 00:20:51.424 "abort": false, 00:20:51.424 "seek_hole": false, 00:20:51.424 "seek_data": false, 00:20:51.424 "copy": false, 00:20:51.424 "nvme_iov_md": false 00:20:51.424 }, 00:20:51.424 "memory_domains": [ 00:20:51.424 { 00:20:51.424 "dma_device_id": "system", 00:20:51.424 "dma_device_type": 1 00:20:51.424 }, 00:20:51.424 { 00:20:51.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.424 "dma_device_type": 2 00:20:51.424 }, 00:20:51.424 { 00:20:51.424 "dma_device_id": "system", 00:20:51.424 "dma_device_type": 1 00:20:51.424 }, 00:20:51.424 { 00:20:51.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.424 "dma_device_type": 2 
00:20:51.424 }, 00:20:51.424 { 00:20:51.424 "dma_device_id": "system", 00:20:51.424 "dma_device_type": 1 00:20:51.424 }, 00:20:51.424 { 00:20:51.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.424 "dma_device_type": 2 00:20:51.424 }, 00:20:51.424 { 00:20:51.425 "dma_device_id": "system", 00:20:51.425 "dma_device_type": 1 00:20:51.425 }, 00:20:51.425 { 00:20:51.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.425 "dma_device_type": 2 00:20:51.425 } 00:20:51.425 ], 00:20:51.425 "driver_specific": { 00:20:51.425 "raid": { 00:20:51.425 "uuid": "9728c7ee-e0e6-4760-8b8b-7815238833ef", 00:20:51.425 "strip_size_kb": 0, 00:20:51.425 "state": "online", 00:20:51.425 "raid_level": "raid1", 00:20:51.425 "superblock": false, 00:20:51.425 "num_base_bdevs": 4, 00:20:51.425 "num_base_bdevs_discovered": 4, 00:20:51.425 "num_base_bdevs_operational": 4, 00:20:51.425 "base_bdevs_list": [ 00:20:51.425 { 00:20:51.425 "name": "BaseBdev1", 00:20:51.425 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:51.425 "is_configured": true, 00:20:51.425 "data_offset": 0, 00:20:51.425 "data_size": 65536 00:20:51.425 }, 00:20:51.425 { 00:20:51.425 "name": "BaseBdev2", 00:20:51.425 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:51.425 "is_configured": true, 00:20:51.425 "data_offset": 0, 00:20:51.425 "data_size": 65536 00:20:51.425 }, 00:20:51.425 { 00:20:51.425 "name": "BaseBdev3", 00:20:51.425 "uuid": "b95b91a0-5458-49cd-97a1-5a325e85ec8c", 00:20:51.425 "is_configured": true, 00:20:51.425 "data_offset": 0, 00:20:51.425 "data_size": 65536 00:20:51.425 }, 00:20:51.425 { 00:20:51.425 "name": "BaseBdev4", 00:20:51.425 "uuid": "dd004b02-fa7c-4a50-9d2a-68734429f100", 00:20:51.425 "is_configured": true, 00:20:51.425 "data_offset": 0, 00:20:51.425 "data_size": 65536 00:20:51.425 } 00:20:51.425 ] 00:20:51.425 } 00:20:51.425 } 00:20:51.425 }' 00:20:51.425 18:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:51.683 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:51.683 BaseBdev2 00:20:51.683 BaseBdev3 00:20:51.683 BaseBdev4' 00:20:51.683 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:51.683 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:51.683 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:51.942 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:51.942 "name": "BaseBdev1", 00:20:51.942 "aliases": [ 00:20:51.942 "215801f3-5a29-451a-b41d-5d2bbcc8ee77" 00:20:51.942 ], 00:20:51.942 "product_name": "Malloc disk", 00:20:51.942 "block_size": 512, 00:20:51.942 "num_blocks": 65536, 00:20:51.942 "uuid": "215801f3-5a29-451a-b41d-5d2bbcc8ee77", 00:20:51.942 "assigned_rate_limits": { 00:20:51.942 "rw_ios_per_sec": 0, 00:20:51.942 "rw_mbytes_per_sec": 0, 00:20:51.942 "r_mbytes_per_sec": 0, 00:20:51.942 "w_mbytes_per_sec": 0 00:20:51.942 }, 00:20:51.942 "claimed": true, 00:20:51.943 "claim_type": "exclusive_write", 00:20:51.943 "zoned": false, 00:20:51.943 "supported_io_types": { 00:20:51.943 "read": true, 00:20:51.943 "write": true, 00:20:51.943 "unmap": true, 00:20:51.943 "flush": true, 00:20:51.943 "reset": true, 00:20:51.943 "nvme_admin": false, 00:20:51.943 "nvme_io": false, 00:20:51.943 "nvme_io_md": false, 00:20:51.943 "write_zeroes": true, 00:20:51.943 "zcopy": true, 00:20:51.943 "get_zone_info": false, 00:20:51.943 "zone_management": false, 00:20:51.943 "zone_append": false, 00:20:51.943 "compare": false, 00:20:51.943 "compare_and_write": false, 00:20:51.943 "abort": true, 00:20:51.943 "seek_hole": false, 00:20:51.943 "seek_data": false, 00:20:51.943 "copy": true, 00:20:51.943 
"nvme_iov_md": false 00:20:51.943 }, 00:20:51.943 "memory_domains": [ 00:20:51.943 { 00:20:51.943 "dma_device_id": "system", 00:20:51.943 "dma_device_type": 1 00:20:51.943 }, 00:20:51.943 { 00:20:51.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.943 "dma_device_type": 2 00:20:51.943 } 00:20:51.943 ], 00:20:51.943 "driver_specific": {} 00:20:51.943 }' 00:20:51.943 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:51.943 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:51.943 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:51.943 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:51.943 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:52.201 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:20:52.460 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:52.460 "name": "BaseBdev2", 00:20:52.460 "aliases": [ 00:20:52.460 "2e975932-1b63-4936-b001-c768f3c7de09" 00:20:52.460 ], 00:20:52.460 "product_name": "Malloc disk", 00:20:52.460 "block_size": 512, 00:20:52.460 "num_blocks": 65536, 00:20:52.460 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:52.460 "assigned_rate_limits": { 00:20:52.460 "rw_ios_per_sec": 0, 00:20:52.460 "rw_mbytes_per_sec": 0, 00:20:52.460 "r_mbytes_per_sec": 0, 00:20:52.460 "w_mbytes_per_sec": 0 00:20:52.460 }, 00:20:52.460 "claimed": true, 00:20:52.460 "claim_type": "exclusive_write", 00:20:52.460 "zoned": false, 00:20:52.460 "supported_io_types": { 00:20:52.460 "read": true, 00:20:52.460 "write": true, 00:20:52.460 "unmap": true, 00:20:52.460 "flush": true, 00:20:52.460 "reset": true, 00:20:52.460 "nvme_admin": false, 00:20:52.460 "nvme_io": false, 00:20:52.460 "nvme_io_md": false, 00:20:52.460 "write_zeroes": true, 00:20:52.460 "zcopy": true, 00:20:52.460 "get_zone_info": false, 00:20:52.460 "zone_management": false, 00:20:52.460 "zone_append": false, 00:20:52.460 "compare": false, 00:20:52.460 "compare_and_write": false, 00:20:52.460 "abort": true, 00:20:52.460 "seek_hole": false, 00:20:52.460 "seek_data": false, 00:20:52.460 "copy": true, 00:20:52.460 "nvme_iov_md": false 00:20:52.460 }, 00:20:52.460 "memory_domains": [ 00:20:52.460 { 00:20:52.460 "dma_device_id": "system", 00:20:52.460 "dma_device_type": 1 00:20:52.460 }, 00:20:52.460 { 00:20:52.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.460 "dma_device_type": 2 00:20:52.460 } 00:20:52.460 ], 00:20:52.460 "driver_specific": {} 00:20:52.460 }' 00:20:52.460 18:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:52.719 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:52.719 18:35:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:52.719 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:52.719 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:52.719 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:52.719 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:52.979 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:53.269 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:53.269 "name": "BaseBdev3", 00:20:53.269 "aliases": [ 00:20:53.269 "b95b91a0-5458-49cd-97a1-5a325e85ec8c" 00:20:53.269 ], 00:20:53.269 "product_name": "Malloc disk", 00:20:53.269 "block_size": 512, 00:20:53.269 "num_blocks": 65536, 00:20:53.270 "uuid": "b95b91a0-5458-49cd-97a1-5a325e85ec8c", 00:20:53.270 "assigned_rate_limits": { 00:20:53.270 "rw_ios_per_sec": 0, 00:20:53.270 "rw_mbytes_per_sec": 0, 00:20:53.270 "r_mbytes_per_sec": 0, 00:20:53.270 "w_mbytes_per_sec": 0 00:20:53.270 }, 
00:20:53.270 "claimed": true, 00:20:53.270 "claim_type": "exclusive_write", 00:20:53.270 "zoned": false, 00:20:53.270 "supported_io_types": { 00:20:53.270 "read": true, 00:20:53.270 "write": true, 00:20:53.270 "unmap": true, 00:20:53.270 "flush": true, 00:20:53.270 "reset": true, 00:20:53.270 "nvme_admin": false, 00:20:53.270 "nvme_io": false, 00:20:53.270 "nvme_io_md": false, 00:20:53.270 "write_zeroes": true, 00:20:53.270 "zcopy": true, 00:20:53.270 "get_zone_info": false, 00:20:53.270 "zone_management": false, 00:20:53.270 "zone_append": false, 00:20:53.270 "compare": false, 00:20:53.270 "compare_and_write": false, 00:20:53.270 "abort": true, 00:20:53.270 "seek_hole": false, 00:20:53.270 "seek_data": false, 00:20:53.270 "copy": true, 00:20:53.270 "nvme_iov_md": false 00:20:53.270 }, 00:20:53.270 "memory_domains": [ 00:20:53.270 { 00:20:53.270 "dma_device_id": "system", 00:20:53.270 "dma_device_type": 1 00:20:53.270 }, 00:20:53.270 { 00:20:53.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.270 "dma_device_type": 2 00:20:53.270 } 00:20:53.270 ], 00:20:53.270 "driver_specific": {} 00:20:53.270 }' 00:20:53.270 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:53.270 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:53.270 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:53.270 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:53.531 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:53.531 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:53.531 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:53.531 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:53.531 18:35:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:53.531 18:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:53.531 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:53.531 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:53.531 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:53.531 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:53.531 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:53.790 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:53.790 "name": "BaseBdev4", 00:20:53.790 "aliases": [ 00:20:53.790 "dd004b02-fa7c-4a50-9d2a-68734429f100" 00:20:53.790 ], 00:20:53.790 "product_name": "Malloc disk", 00:20:53.790 "block_size": 512, 00:20:53.790 "num_blocks": 65536, 00:20:53.790 "uuid": "dd004b02-fa7c-4a50-9d2a-68734429f100", 00:20:53.790 "assigned_rate_limits": { 00:20:53.790 "rw_ios_per_sec": 0, 00:20:53.790 "rw_mbytes_per_sec": 0, 00:20:53.790 "r_mbytes_per_sec": 0, 00:20:53.790 "w_mbytes_per_sec": 0 00:20:53.790 }, 00:20:53.790 "claimed": true, 00:20:53.790 "claim_type": "exclusive_write", 00:20:53.790 "zoned": false, 00:20:53.790 "supported_io_types": { 00:20:53.790 "read": true, 00:20:53.790 "write": true, 00:20:53.790 "unmap": true, 00:20:53.790 "flush": true, 00:20:53.790 "reset": true, 00:20:53.790 "nvme_admin": false, 00:20:53.790 "nvme_io": false, 00:20:53.790 "nvme_io_md": false, 00:20:53.790 "write_zeroes": true, 00:20:53.790 "zcopy": true, 00:20:53.790 "get_zone_info": false, 00:20:53.790 "zone_management": false, 00:20:53.790 "zone_append": false, 00:20:53.790 "compare": false, 00:20:53.790 "compare_and_write": false, 
00:20:53.790 "abort": true, 00:20:53.790 "seek_hole": false, 00:20:53.790 "seek_data": false, 00:20:53.790 "copy": true, 00:20:53.790 "nvme_iov_md": false 00:20:53.790 }, 00:20:53.790 "memory_domains": [ 00:20:53.790 { 00:20:53.790 "dma_device_id": "system", 00:20:53.790 "dma_device_type": 1 00:20:53.790 }, 00:20:53.790 { 00:20:53.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.790 "dma_device_type": 2 00:20:53.790 } 00:20:53.790 ], 00:20:53.790 "driver_specific": {} 00:20:53.790 }' 00:20:53.790 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:54.048 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.307 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.307 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:54.307 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:54.565 [2024-07-15 18:35:39.889682] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.565 18:35:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.824 18:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.824 "name": "Existed_Raid", 00:20:54.824 "uuid": "9728c7ee-e0e6-4760-8b8b-7815238833ef", 00:20:54.824 "strip_size_kb": 0, 00:20:54.824 "state": "online", 00:20:54.824 "raid_level": "raid1", 00:20:54.824 "superblock": false, 00:20:54.824 "num_base_bdevs": 4, 00:20:54.824 "num_base_bdevs_discovered": 3, 00:20:54.824 "num_base_bdevs_operational": 3, 00:20:54.824 "base_bdevs_list": [ 00:20:54.824 { 00:20:54.824 "name": null, 00:20:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.824 "is_configured": false, 00:20:54.824 "data_offset": 0, 00:20:54.824 "data_size": 65536 00:20:54.824 }, 00:20:54.824 { 00:20:54.824 "name": "BaseBdev2", 00:20:54.824 "uuid": "2e975932-1b63-4936-b001-c768f3c7de09", 00:20:54.824 "is_configured": true, 00:20:54.824 "data_offset": 0, 00:20:54.824 "data_size": 65536 00:20:54.824 }, 00:20:54.824 { 00:20:54.824 "name": "BaseBdev3", 00:20:54.824 "uuid": "b95b91a0-5458-49cd-97a1-5a325e85ec8c", 00:20:54.824 "is_configured": true, 00:20:54.824 "data_offset": 0, 00:20:54.824 "data_size": 65536 00:20:54.824 }, 00:20:54.824 { 00:20:54.824 "name": "BaseBdev4", 00:20:54.824 "uuid": "dd004b02-fa7c-4a50-9d2a-68734429f100", 00:20:54.824 "is_configured": true, 00:20:54.824 "data_offset": 0, 00:20:54.824 "data_size": 65536 00:20:54.824 } 00:20:54.824 ] 00:20:54.824 }' 00:20:54.824 18:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.824 18:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.391 18:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:55.391 18:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:55.391 18:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.391 18:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:55.650 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:55.650 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:55.650 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:56.218 [2024-07-15 18:35:41.543165] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.218 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:56.218 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:56.218 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.218 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:56.476 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:56.476 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:56.476 18:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:57.045 [2024-07-15 18:35:42.303734] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:57.045 18:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:57.045 18:35:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:57.045 18:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.045 18:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:57.304 18:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:57.304 18:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:57.304 18:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:57.564 [2024-07-15 18:35:43.072460] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:57.564 [2024-07-15 18:35:43.072535] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.564 [2024-07-15 18:35:43.083171] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.564 [2024-07-15 18:35:43.083202] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.564 [2024-07-15 18:35:43.083210] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeb9490 name Existed_Raid, state offline 00:20:57.564 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:57.564 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:57.564 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.564 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 
00:20:57.824 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:57.824 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:57.824 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:57.824 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:57.824 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:57.824 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:58.083 BaseBdev2 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:58.083 18:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:58.651 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:58.911 [ 00:20:58.911 { 00:20:58.911 "name": "BaseBdev2", 00:20:58.911 "aliases": [ 00:20:58.911 "a7c6ace8-426e-464f-9cf3-2a745a95f601" 
00:20:58.911 ], 00:20:58.911 "product_name": "Malloc disk", 00:20:58.911 "block_size": 512, 00:20:58.911 "num_blocks": 65536, 00:20:58.911 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:20:58.911 "assigned_rate_limits": { 00:20:58.911 "rw_ios_per_sec": 0, 00:20:58.911 "rw_mbytes_per_sec": 0, 00:20:58.911 "r_mbytes_per_sec": 0, 00:20:58.911 "w_mbytes_per_sec": 0 00:20:58.911 }, 00:20:58.911 "claimed": false, 00:20:58.911 "zoned": false, 00:20:58.911 "supported_io_types": { 00:20:58.911 "read": true, 00:20:58.911 "write": true, 00:20:58.911 "unmap": true, 00:20:58.911 "flush": true, 00:20:58.911 "reset": true, 00:20:58.911 "nvme_admin": false, 00:20:58.911 "nvme_io": false, 00:20:58.911 "nvme_io_md": false, 00:20:58.911 "write_zeroes": true, 00:20:58.911 "zcopy": true, 00:20:58.911 "get_zone_info": false, 00:20:58.911 "zone_management": false, 00:20:58.911 "zone_append": false, 00:20:58.911 "compare": false, 00:20:58.911 "compare_and_write": false, 00:20:58.911 "abort": true, 00:20:58.911 "seek_hole": false, 00:20:58.911 "seek_data": false, 00:20:58.911 "copy": true, 00:20:58.911 "nvme_iov_md": false 00:20:58.911 }, 00:20:58.911 "memory_domains": [ 00:20:58.911 { 00:20:58.911 "dma_device_id": "system", 00:20:58.911 "dma_device_type": 1 00:20:58.911 }, 00:20:58.911 { 00:20:58.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.911 "dma_device_type": 2 00:20:58.911 } 00:20:58.911 ], 00:20:58.911 "driver_specific": {} 00:20:58.911 } 00:20:58.911 ] 00:20:58.911 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:58.911 18:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:58.911 18:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:58.911 18:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev3 00:20:59.479 BaseBdev3 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:59.479 18:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:59.738 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:00.306 [ 00:21:00.306 { 00:21:00.306 "name": "BaseBdev3", 00:21:00.306 "aliases": [ 00:21:00.306 "e07eef8d-5bd7-44d8-93b1-d2889e9c7157" 00:21:00.306 ], 00:21:00.306 "product_name": "Malloc disk", 00:21:00.306 "block_size": 512, 00:21:00.306 "num_blocks": 65536, 00:21:00.306 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:00.306 "assigned_rate_limits": { 00:21:00.306 "rw_ios_per_sec": 0, 00:21:00.306 "rw_mbytes_per_sec": 0, 00:21:00.306 "r_mbytes_per_sec": 0, 00:21:00.306 "w_mbytes_per_sec": 0 00:21:00.306 }, 00:21:00.306 "claimed": false, 00:21:00.306 "zoned": false, 00:21:00.306 "supported_io_types": { 00:21:00.306 "read": true, 00:21:00.306 "write": true, 00:21:00.306 "unmap": true, 00:21:00.306 "flush": true, 00:21:00.306 "reset": true, 00:21:00.306 "nvme_admin": false, 00:21:00.306 "nvme_io": false, 00:21:00.306 "nvme_io_md": false, 
00:21:00.306 "write_zeroes": true, 00:21:00.306 "zcopy": true, 00:21:00.306 "get_zone_info": false, 00:21:00.306 "zone_management": false, 00:21:00.306 "zone_append": false, 00:21:00.306 "compare": false, 00:21:00.306 "compare_and_write": false, 00:21:00.306 "abort": true, 00:21:00.306 "seek_hole": false, 00:21:00.306 "seek_data": false, 00:21:00.306 "copy": true, 00:21:00.306 "nvme_iov_md": false 00:21:00.306 }, 00:21:00.306 "memory_domains": [ 00:21:00.306 { 00:21:00.306 "dma_device_id": "system", 00:21:00.306 "dma_device_type": 1 00:21:00.306 }, 00:21:00.306 { 00:21:00.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.306 "dma_device_type": 2 00:21:00.306 } 00:21:00.306 ], 00:21:00.306 "driver_specific": {} 00:21:00.306 } 00:21:00.306 ] 00:21:00.306 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:00.306 18:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:00.306 18:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:00.306 18:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:00.565 BaseBdev4 00:21:00.565 18:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:00.565 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:00.565 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:00.565 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:00.565 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:00.565 18:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:00.565 18:35:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:00.823 18:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:01.082 [ 00:21:01.082 { 00:21:01.082 "name": "BaseBdev4", 00:21:01.082 "aliases": [ 00:21:01.082 "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02" 00:21:01.082 ], 00:21:01.082 "product_name": "Malloc disk", 00:21:01.082 "block_size": 512, 00:21:01.082 "num_blocks": 65536, 00:21:01.082 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:01.082 "assigned_rate_limits": { 00:21:01.082 "rw_ios_per_sec": 0, 00:21:01.082 "rw_mbytes_per_sec": 0, 00:21:01.082 "r_mbytes_per_sec": 0, 00:21:01.082 "w_mbytes_per_sec": 0 00:21:01.082 }, 00:21:01.082 "claimed": false, 00:21:01.082 "zoned": false, 00:21:01.082 "supported_io_types": { 00:21:01.082 "read": true, 00:21:01.082 "write": true, 00:21:01.082 "unmap": true, 00:21:01.082 "flush": true, 00:21:01.082 "reset": true, 00:21:01.082 "nvme_admin": false, 00:21:01.082 "nvme_io": false, 00:21:01.082 "nvme_io_md": false, 00:21:01.082 "write_zeroes": true, 00:21:01.082 "zcopy": true, 00:21:01.082 "get_zone_info": false, 00:21:01.082 "zone_management": false, 00:21:01.082 "zone_append": false, 00:21:01.082 "compare": false, 00:21:01.082 "compare_and_write": false, 00:21:01.082 "abort": true, 00:21:01.082 "seek_hole": false, 00:21:01.082 "seek_data": false, 00:21:01.082 "copy": true, 00:21:01.082 "nvme_iov_md": false 00:21:01.082 }, 00:21:01.082 "memory_domains": [ 00:21:01.082 { 00:21:01.082 "dma_device_id": "system", 00:21:01.082 "dma_device_type": 1 00:21:01.082 }, 00:21:01.082 { 00:21:01.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.082 "dma_device_type": 2 00:21:01.082 } 00:21:01.082 ], 00:21:01.082 "driver_specific": {} 
00:21:01.082 } 00:21:01.082 ] 00:21:01.082 18:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:01.082 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:01.082 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:01.082 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:01.339 [2024-07-15 18:35:46.858232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.339 [2024-07-15 18:35:46.858269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.339 [2024-07-15 18:35:46.858287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.339 [2024-07-15 18:35:46.859669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:01.339 [2024-07-15 18:35:46.859711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:01.339 18:35:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.339 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.597 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.597 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.597 18:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.597 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.597 "name": "Existed_Raid", 00:21:01.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.597 "strip_size_kb": 0, 00:21:01.597 "state": "configuring", 00:21:01.597 "raid_level": "raid1", 00:21:01.597 "superblock": false, 00:21:01.597 "num_base_bdevs": 4, 00:21:01.597 "num_base_bdevs_discovered": 3, 00:21:01.597 "num_base_bdevs_operational": 4, 00:21:01.597 "base_bdevs_list": [ 00:21:01.597 { 00:21:01.597 "name": "BaseBdev1", 00:21:01.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.597 "is_configured": false, 00:21:01.597 "data_offset": 0, 00:21:01.597 "data_size": 0 00:21:01.597 }, 00:21:01.597 { 00:21:01.597 "name": "BaseBdev2", 00:21:01.597 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:01.597 "is_configured": true, 00:21:01.597 "data_offset": 0, 00:21:01.597 "data_size": 65536 00:21:01.597 }, 00:21:01.597 { 00:21:01.597 "name": "BaseBdev3", 00:21:01.597 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:01.597 "is_configured": true, 00:21:01.597 "data_offset": 0, 00:21:01.597 "data_size": 65536 00:21:01.597 }, 00:21:01.597 { 00:21:01.597 "name": "BaseBdev4", 00:21:01.597 "uuid": 
"c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:01.597 "is_configured": true, 00:21:01.597 "data_offset": 0, 00:21:01.597 "data_size": 65536 00:21:01.597 } 00:21:01.597 ] 00:21:01.597 }' 00:21:01.597 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.597 18:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:02.532 [2024-07-15 18:35:47.909056] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.532 18:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.789 18:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.789 "name": "Existed_Raid", 00:21:02.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.789 "strip_size_kb": 0, 00:21:02.789 "state": "configuring", 00:21:02.789 "raid_level": "raid1", 00:21:02.789 "superblock": false, 00:21:02.789 "num_base_bdevs": 4, 00:21:02.789 "num_base_bdevs_discovered": 2, 00:21:02.789 "num_base_bdevs_operational": 4, 00:21:02.789 "base_bdevs_list": [ 00:21:02.789 { 00:21:02.789 "name": "BaseBdev1", 00:21:02.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.789 "is_configured": false, 00:21:02.789 "data_offset": 0, 00:21:02.789 "data_size": 0 00:21:02.789 }, 00:21:02.789 { 00:21:02.790 "name": null, 00:21:02.790 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:02.790 "is_configured": false, 00:21:02.790 "data_offset": 0, 00:21:02.790 "data_size": 65536 00:21:02.790 }, 00:21:02.790 { 00:21:02.790 "name": "BaseBdev3", 00:21:02.790 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:02.790 "is_configured": true, 00:21:02.790 "data_offset": 0, 00:21:02.790 "data_size": 65536 00:21:02.790 }, 00:21:02.790 { 00:21:02.790 "name": "BaseBdev4", 00:21:02.790 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:02.790 "is_configured": true, 00:21:02.790 "data_offset": 0, 00:21:02.790 "data_size": 65536 00:21:02.790 } 00:21:02.790 ] 00:21:02.790 }' 00:21:02.790 18:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.790 18:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.722 18:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.722 18:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:03.722 18:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:03.722 18:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:04.289 [2024-07-15 18:35:49.697166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.289 BaseBdev1 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.289 18:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.857 18:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:05.425 [ 00:21:05.425 { 00:21:05.425 "name": "BaseBdev1", 00:21:05.425 "aliases": [ 00:21:05.425 "b008c28a-593b-41d8-a5ff-7007693041b1" 00:21:05.425 ], 00:21:05.425 
"product_name": "Malloc disk", 00:21:05.425 "block_size": 512, 00:21:05.425 "num_blocks": 65536, 00:21:05.425 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:05.425 "assigned_rate_limits": { 00:21:05.425 "rw_ios_per_sec": 0, 00:21:05.425 "rw_mbytes_per_sec": 0, 00:21:05.425 "r_mbytes_per_sec": 0, 00:21:05.425 "w_mbytes_per_sec": 0 00:21:05.425 }, 00:21:05.425 "claimed": true, 00:21:05.425 "claim_type": "exclusive_write", 00:21:05.425 "zoned": false, 00:21:05.425 "supported_io_types": { 00:21:05.425 "read": true, 00:21:05.425 "write": true, 00:21:05.425 "unmap": true, 00:21:05.425 "flush": true, 00:21:05.425 "reset": true, 00:21:05.425 "nvme_admin": false, 00:21:05.425 "nvme_io": false, 00:21:05.425 "nvme_io_md": false, 00:21:05.425 "write_zeroes": true, 00:21:05.425 "zcopy": true, 00:21:05.425 "get_zone_info": false, 00:21:05.425 "zone_management": false, 00:21:05.425 "zone_append": false, 00:21:05.425 "compare": false, 00:21:05.425 "compare_and_write": false, 00:21:05.425 "abort": true, 00:21:05.425 "seek_hole": false, 00:21:05.425 "seek_data": false, 00:21:05.425 "copy": true, 00:21:05.425 "nvme_iov_md": false 00:21:05.425 }, 00:21:05.425 "memory_domains": [ 00:21:05.425 { 00:21:05.425 "dma_device_id": "system", 00:21:05.425 "dma_device_type": 1 00:21:05.425 }, 00:21:05.425 { 00:21:05.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.425 "dma_device_type": 2 00:21:05.425 } 00:21:05.425 ], 00:21:05.425 "driver_specific": {} 00:21:05.425 } 00:21:05.425 ] 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:05.425 
18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.425 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.684 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.684 "name": "Existed_Raid", 00:21:05.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.684 "strip_size_kb": 0, 00:21:05.684 "state": "configuring", 00:21:05.684 "raid_level": "raid1", 00:21:05.684 "superblock": false, 00:21:05.684 "num_base_bdevs": 4, 00:21:05.684 "num_base_bdevs_discovered": 3, 00:21:05.684 "num_base_bdevs_operational": 4, 00:21:05.684 "base_bdevs_list": [ 00:21:05.684 { 00:21:05.684 "name": "BaseBdev1", 00:21:05.684 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:05.684 "is_configured": true, 00:21:05.684 "data_offset": 0, 00:21:05.684 "data_size": 65536 00:21:05.684 }, 00:21:05.684 { 00:21:05.684 "name": null, 00:21:05.684 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:05.684 "is_configured": false, 00:21:05.684 "data_offset": 0, 
00:21:05.684 "data_size": 65536 00:21:05.684 }, 00:21:05.684 { 00:21:05.684 "name": "BaseBdev3", 00:21:05.684 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:05.684 "is_configured": true, 00:21:05.684 "data_offset": 0, 00:21:05.684 "data_size": 65536 00:21:05.684 }, 00:21:05.684 { 00:21:05.684 "name": "BaseBdev4", 00:21:05.684 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:05.684 "is_configured": true, 00:21:05.684 "data_offset": 0, 00:21:05.684 "data_size": 65536 00:21:05.684 } 00:21:05.684 ] 00:21:05.684 }' 00:21:05.684 18:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.684 18:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.251 18:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.251 18:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:06.510 18:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:06.510 18:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:06.768 [2024-07-15 18:35:52.095699] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.768 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.026 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.026 "name": "Existed_Raid", 00:21:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.026 "strip_size_kb": 0, 00:21:07.026 "state": "configuring", 00:21:07.026 "raid_level": "raid1", 00:21:07.026 "superblock": false, 00:21:07.026 "num_base_bdevs": 4, 00:21:07.026 "num_base_bdevs_discovered": 2, 00:21:07.026 "num_base_bdevs_operational": 4, 00:21:07.026 "base_bdevs_list": [ 00:21:07.026 { 00:21:07.026 "name": "BaseBdev1", 00:21:07.026 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:07.026 "is_configured": true, 00:21:07.026 "data_offset": 0, 00:21:07.026 "data_size": 65536 00:21:07.026 }, 00:21:07.026 { 00:21:07.026 "name": null, 00:21:07.026 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:07.026 "is_configured": false, 00:21:07.026 "data_offset": 0, 00:21:07.026 "data_size": 65536 00:21:07.026 }, 00:21:07.026 { 00:21:07.026 "name": null, 00:21:07.026 
"uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:07.026 "is_configured": false, 00:21:07.026 "data_offset": 0, 00:21:07.026 "data_size": 65536 00:21:07.026 }, 00:21:07.026 { 00:21:07.026 "name": "BaseBdev4", 00:21:07.026 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:07.026 "is_configured": true, 00:21:07.026 "data_offset": 0, 00:21:07.026 "data_size": 65536 00:21:07.026 } 00:21:07.026 ] 00:21:07.026 }' 00:21:07.026 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.026 18:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.592 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.592 18:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:07.860 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:07.860 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:08.121 [2024-07-15 18:35:53.435326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.121 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.122 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.122 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.122 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.122 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.380 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.380 "name": "Existed_Raid", 00:21:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.380 "strip_size_kb": 0, 00:21:08.380 "state": "configuring", 00:21:08.380 "raid_level": "raid1", 00:21:08.380 "superblock": false, 00:21:08.380 "num_base_bdevs": 4, 00:21:08.380 "num_base_bdevs_discovered": 3, 00:21:08.380 "num_base_bdevs_operational": 4, 00:21:08.380 "base_bdevs_list": [ 00:21:08.380 { 00:21:08.380 "name": "BaseBdev1", 00:21:08.380 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:08.380 "is_configured": true, 00:21:08.380 "data_offset": 0, 00:21:08.380 "data_size": 65536 00:21:08.380 }, 00:21:08.380 { 00:21:08.380 "name": null, 00:21:08.380 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:08.380 "is_configured": false, 00:21:08.380 "data_offset": 0, 00:21:08.380 "data_size": 65536 00:21:08.380 }, 00:21:08.380 { 00:21:08.380 "name": "BaseBdev3", 00:21:08.380 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:08.380 "is_configured": true, 
00:21:08.380 "data_offset": 0, 00:21:08.380 "data_size": 65536 00:21:08.380 }, 00:21:08.380 { 00:21:08.380 "name": "BaseBdev4", 00:21:08.380 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:08.380 "is_configured": true, 00:21:08.380 "data_offset": 0, 00:21:08.380 "data_size": 65536 00:21:08.380 } 00:21:08.380 ] 00:21:08.380 }' 00:21:08.380 18:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.380 18:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.946 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.946 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:09.204 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:09.204 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:09.462 [2024-07-15 18:35:54.843144] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:09.462 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:09.462 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.462 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.462 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.462 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.463 18:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.721 18:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.721 "name": "Existed_Raid", 00:21:09.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.721 "strip_size_kb": 0, 00:21:09.721 "state": "configuring", 00:21:09.721 "raid_level": "raid1", 00:21:09.721 "superblock": false, 00:21:09.721 "num_base_bdevs": 4, 00:21:09.721 "num_base_bdevs_discovered": 2, 00:21:09.721 "num_base_bdevs_operational": 4, 00:21:09.721 "base_bdevs_list": [ 00:21:09.721 { 00:21:09.721 "name": null, 00:21:09.721 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:09.721 "is_configured": false, 00:21:09.721 "data_offset": 0, 00:21:09.721 "data_size": 65536 00:21:09.721 }, 00:21:09.721 { 00:21:09.721 "name": null, 00:21:09.721 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:09.721 "is_configured": false, 00:21:09.721 "data_offset": 0, 00:21:09.721 "data_size": 65536 00:21:09.721 }, 00:21:09.721 { 00:21:09.721 "name": "BaseBdev3", 00:21:09.721 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:09.721 "is_configured": true, 00:21:09.721 "data_offset": 0, 00:21:09.721 "data_size": 65536 00:21:09.721 }, 00:21:09.721 { 00:21:09.721 "name": 
"BaseBdev4", 00:21:09.721 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:09.721 "is_configured": true, 00:21:09.721 "data_offset": 0, 00:21:09.721 "data_size": 65536 00:21:09.721 } 00:21:09.721 ] 00:21:09.721 }' 00:21:09.721 18:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.721 18:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.288 18:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.288 18:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:10.581 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:10.581 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:10.842 [2024-07-15 18:35:56.237354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.842 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.843 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.100 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.100 "name": "Existed_Raid", 00:21:11.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.100 "strip_size_kb": 0, 00:21:11.100 "state": "configuring", 00:21:11.100 "raid_level": "raid1", 00:21:11.100 "superblock": false, 00:21:11.100 "num_base_bdevs": 4, 00:21:11.100 "num_base_bdevs_discovered": 3, 00:21:11.100 "num_base_bdevs_operational": 4, 00:21:11.100 "base_bdevs_list": [ 00:21:11.100 { 00:21:11.100 "name": null, 00:21:11.100 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:11.100 "is_configured": false, 00:21:11.100 "data_offset": 0, 00:21:11.100 "data_size": 65536 00:21:11.100 }, 00:21:11.100 { 00:21:11.100 "name": "BaseBdev2", 00:21:11.100 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:11.100 "is_configured": true, 00:21:11.100 "data_offset": 0, 00:21:11.100 "data_size": 65536 00:21:11.100 }, 00:21:11.100 { 00:21:11.100 "name": "BaseBdev3", 00:21:11.100 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:11.100 "is_configured": true, 00:21:11.100 "data_offset": 0, 00:21:11.100 "data_size": 65536 00:21:11.100 }, 00:21:11.100 { 00:21:11.100 "name": "BaseBdev4", 00:21:11.100 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:11.100 
"is_configured": true, 00:21:11.100 "data_offset": 0, 00:21:11.100 "data_size": 65536 00:21:11.100 } 00:21:11.100 ] 00:21:11.100 }' 00:21:11.100 18:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.100 18:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.667 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.667 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:11.924 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:11.924 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.924 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:12.182 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b008c28a-593b-41d8-a5ff-7007693041b1 00:21:12.440 [2024-07-15 18:35:57.889043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:12.440 [2024-07-15 18:35:57.889081] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xebac00 00:21:12.440 [2024-07-15 18:35:57.889087] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:12.440 [2024-07-15 18:35:57.889286] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x106a5e0 00:21:12.440 [2024-07-15 18:35:57.889415] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xebac00 00:21:12.440 [2024-07-15 
18:35:57.889423] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xebac00 00:21:12.440 [2024-07-15 18:35:57.889583] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.440 NewBaseBdev 00:21:12.440 18:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:12.440 18:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:12.441 18:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:12.441 18:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:12.441 18:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:12.441 18:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:12.441 18:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:12.699 18:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:12.957 [ 00:21:12.957 { 00:21:12.957 "name": "NewBaseBdev", 00:21:12.957 "aliases": [ 00:21:12.957 "b008c28a-593b-41d8-a5ff-7007693041b1" 00:21:12.957 ], 00:21:12.957 "product_name": "Malloc disk", 00:21:12.957 "block_size": 512, 00:21:12.957 "num_blocks": 65536, 00:21:12.957 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:12.957 "assigned_rate_limits": { 00:21:12.957 "rw_ios_per_sec": 0, 00:21:12.957 "rw_mbytes_per_sec": 0, 00:21:12.957 "r_mbytes_per_sec": 0, 00:21:12.957 "w_mbytes_per_sec": 0 00:21:12.957 }, 00:21:12.957 "claimed": true, 00:21:12.957 "claim_type": "exclusive_write", 00:21:12.957 "zoned": 
false, 00:21:12.957 "supported_io_types": { 00:21:12.957 "read": true, 00:21:12.957 "write": true, 00:21:12.957 "unmap": true, 00:21:12.957 "flush": true, 00:21:12.957 "reset": true, 00:21:12.957 "nvme_admin": false, 00:21:12.957 "nvme_io": false, 00:21:12.957 "nvme_io_md": false, 00:21:12.957 "write_zeroes": true, 00:21:12.957 "zcopy": true, 00:21:12.957 "get_zone_info": false, 00:21:12.957 "zone_management": false, 00:21:12.957 "zone_append": false, 00:21:12.957 "compare": false, 00:21:12.957 "compare_and_write": false, 00:21:12.957 "abort": true, 00:21:12.957 "seek_hole": false, 00:21:12.957 "seek_data": false, 00:21:12.957 "copy": true, 00:21:12.957 "nvme_iov_md": false 00:21:12.957 }, 00:21:12.957 "memory_domains": [ 00:21:12.957 { 00:21:12.957 "dma_device_id": "system", 00:21:12.957 "dma_device_type": 1 00:21:12.957 }, 00:21:12.957 { 00:21:12.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.957 "dma_device_type": 2 00:21:12.957 } 00:21:12.957 ], 00:21:12.957 "driver_specific": {} 00:21:12.957 } 00:21:12.957 ] 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:12.957 18:35:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.957 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.216 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.216 "name": "Existed_Raid", 00:21:13.216 "uuid": "6906708b-ea79-4756-9ee7-e7abc6c69dc2", 00:21:13.216 "strip_size_kb": 0, 00:21:13.216 "state": "online", 00:21:13.216 "raid_level": "raid1", 00:21:13.216 "superblock": false, 00:21:13.216 "num_base_bdevs": 4, 00:21:13.216 "num_base_bdevs_discovered": 4, 00:21:13.216 "num_base_bdevs_operational": 4, 00:21:13.216 "base_bdevs_list": [ 00:21:13.216 { 00:21:13.216 "name": "NewBaseBdev", 00:21:13.216 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:13.216 "is_configured": true, 00:21:13.216 "data_offset": 0, 00:21:13.216 "data_size": 65536 00:21:13.216 }, 00:21:13.216 { 00:21:13.216 "name": "BaseBdev2", 00:21:13.216 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:13.216 "is_configured": true, 00:21:13.216 "data_offset": 0, 00:21:13.216 "data_size": 65536 00:21:13.216 }, 00:21:13.216 { 00:21:13.216 "name": "BaseBdev3", 00:21:13.216 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:13.216 "is_configured": true, 00:21:13.216 "data_offset": 0, 00:21:13.216 "data_size": 65536 00:21:13.216 }, 00:21:13.216 { 00:21:13.216 "name": "BaseBdev4", 00:21:13.216 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:13.216 "is_configured": true, 00:21:13.216 "data_offset": 0, 00:21:13.216 
"data_size": 65536 00:21:13.216 } 00:21:13.216 ] 00:21:13.216 }' 00:21:13.216 18:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.216 18:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:13.784 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:14.041 [2024-07-15 18:35:59.541993] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.041 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:14.041 "name": "Existed_Raid", 00:21:14.041 "aliases": [ 00:21:14.041 "6906708b-ea79-4756-9ee7-e7abc6c69dc2" 00:21:14.041 ], 00:21:14.041 "product_name": "Raid Volume", 00:21:14.041 "block_size": 512, 00:21:14.041 "num_blocks": 65536, 00:21:14.041 "uuid": "6906708b-ea79-4756-9ee7-e7abc6c69dc2", 00:21:14.041 "assigned_rate_limits": { 00:21:14.041 "rw_ios_per_sec": 0, 00:21:14.041 "rw_mbytes_per_sec": 0, 00:21:14.041 "r_mbytes_per_sec": 0, 00:21:14.041 "w_mbytes_per_sec": 0 00:21:14.041 }, 00:21:14.041 "claimed": false, 
00:21:14.041 "zoned": false, 00:21:14.041 "supported_io_types": { 00:21:14.041 "read": true, 00:21:14.041 "write": true, 00:21:14.041 "unmap": false, 00:21:14.041 "flush": false, 00:21:14.041 "reset": true, 00:21:14.041 "nvme_admin": false, 00:21:14.041 "nvme_io": false, 00:21:14.041 "nvme_io_md": false, 00:21:14.041 "write_zeroes": true, 00:21:14.041 "zcopy": false, 00:21:14.041 "get_zone_info": false, 00:21:14.041 "zone_management": false, 00:21:14.041 "zone_append": false, 00:21:14.041 "compare": false, 00:21:14.041 "compare_and_write": false, 00:21:14.041 "abort": false, 00:21:14.041 "seek_hole": false, 00:21:14.041 "seek_data": false, 00:21:14.041 "copy": false, 00:21:14.041 "nvme_iov_md": false 00:21:14.041 }, 00:21:14.041 "memory_domains": [ 00:21:14.041 { 00:21:14.041 "dma_device_id": "system", 00:21:14.041 "dma_device_type": 1 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.041 "dma_device_type": 2 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "system", 00:21:14.041 "dma_device_type": 1 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.041 "dma_device_type": 2 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "system", 00:21:14.041 "dma_device_type": 1 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.041 "dma_device_type": 2 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "system", 00:21:14.041 "dma_device_type": 1 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.041 "dma_device_type": 2 00:21:14.041 } 00:21:14.041 ], 00:21:14.041 "driver_specific": { 00:21:14.041 "raid": { 00:21:14.041 "uuid": "6906708b-ea79-4756-9ee7-e7abc6c69dc2", 00:21:14.041 "strip_size_kb": 0, 00:21:14.041 "state": "online", 00:21:14.041 "raid_level": "raid1", 00:21:14.041 "superblock": false, 00:21:14.041 "num_base_bdevs": 4, 00:21:14.041 
"num_base_bdevs_discovered": 4, 00:21:14.041 "num_base_bdevs_operational": 4, 00:21:14.041 "base_bdevs_list": [ 00:21:14.041 { 00:21:14.041 "name": "NewBaseBdev", 00:21:14.041 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:14.041 "is_configured": true, 00:21:14.041 "data_offset": 0, 00:21:14.041 "data_size": 65536 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "name": "BaseBdev2", 00:21:14.041 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:14.041 "is_configured": true, 00:21:14.041 "data_offset": 0, 00:21:14.041 "data_size": 65536 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "name": "BaseBdev3", 00:21:14.041 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:14.041 "is_configured": true, 00:21:14.041 "data_offset": 0, 00:21:14.041 "data_size": 65536 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "name": "BaseBdev4", 00:21:14.041 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:14.041 "is_configured": true, 00:21:14.041 "data_offset": 0, 00:21:14.041 "data_size": 65536 00:21:14.041 } 00:21:14.042 ] 00:21:14.042 } 00:21:14.042 } 00:21:14.042 }' 00:21:14.042 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.299 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:14.299 BaseBdev2 00:21:14.299 BaseBdev3 00:21:14.299 BaseBdev4' 00:21:14.299 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:14.299 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:14.299 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:14.557 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:14.557 "name": "NewBaseBdev", 
00:21:14.557 "aliases": [ 00:21:14.557 "b008c28a-593b-41d8-a5ff-7007693041b1" 00:21:14.557 ], 00:21:14.557 "product_name": "Malloc disk", 00:21:14.557 "block_size": 512, 00:21:14.557 "num_blocks": 65536, 00:21:14.557 "uuid": "b008c28a-593b-41d8-a5ff-7007693041b1", 00:21:14.557 "assigned_rate_limits": { 00:21:14.557 "rw_ios_per_sec": 0, 00:21:14.557 "rw_mbytes_per_sec": 0, 00:21:14.557 "r_mbytes_per_sec": 0, 00:21:14.557 "w_mbytes_per_sec": 0 00:21:14.557 }, 00:21:14.557 "claimed": true, 00:21:14.557 "claim_type": "exclusive_write", 00:21:14.557 "zoned": false, 00:21:14.557 "supported_io_types": { 00:21:14.557 "read": true, 00:21:14.557 "write": true, 00:21:14.557 "unmap": true, 00:21:14.557 "flush": true, 00:21:14.557 "reset": true, 00:21:14.557 "nvme_admin": false, 00:21:14.557 "nvme_io": false, 00:21:14.557 "nvme_io_md": false, 00:21:14.557 "write_zeroes": true, 00:21:14.557 "zcopy": true, 00:21:14.557 "get_zone_info": false, 00:21:14.557 "zone_management": false, 00:21:14.557 "zone_append": false, 00:21:14.557 "compare": false, 00:21:14.557 "compare_and_write": false, 00:21:14.557 "abort": true, 00:21:14.557 "seek_hole": false, 00:21:14.557 "seek_data": false, 00:21:14.557 "copy": true, 00:21:14.557 "nvme_iov_md": false 00:21:14.557 }, 00:21:14.557 "memory_domains": [ 00:21:14.557 { 00:21:14.557 "dma_device_id": "system", 00:21:14.557 "dma_device_type": 1 00:21:14.557 }, 00:21:14.557 { 00:21:14.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.557 "dma_device_type": 2 00:21:14.557 } 00:21:14.557 ], 00:21:14.557 "driver_specific": {} 00:21:14.557 }' 00:21:14.557 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.557 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.557 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.557 18:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:21:14.557 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.557 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.557 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.557 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:14.815 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.073 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:15.073 "name": "BaseBdev2", 00:21:15.073 "aliases": [ 00:21:15.073 "a7c6ace8-426e-464f-9cf3-2a745a95f601" 00:21:15.073 ], 00:21:15.073 "product_name": "Malloc disk", 00:21:15.073 "block_size": 512, 00:21:15.073 "num_blocks": 65536, 00:21:15.073 "uuid": "a7c6ace8-426e-464f-9cf3-2a745a95f601", 00:21:15.073 "assigned_rate_limits": { 00:21:15.073 "rw_ios_per_sec": 0, 00:21:15.073 "rw_mbytes_per_sec": 0, 00:21:15.073 "r_mbytes_per_sec": 0, 00:21:15.073 "w_mbytes_per_sec": 0 00:21:15.073 }, 00:21:15.073 "claimed": true, 00:21:15.073 "claim_type": "exclusive_write", 00:21:15.073 "zoned": false, 00:21:15.073 "supported_io_types": { 00:21:15.073 
"read": true, 00:21:15.073 "write": true, 00:21:15.073 "unmap": true, 00:21:15.073 "flush": true, 00:21:15.073 "reset": true, 00:21:15.073 "nvme_admin": false, 00:21:15.073 "nvme_io": false, 00:21:15.073 "nvme_io_md": false, 00:21:15.073 "write_zeroes": true, 00:21:15.073 "zcopy": true, 00:21:15.073 "get_zone_info": false, 00:21:15.073 "zone_management": false, 00:21:15.073 "zone_append": false, 00:21:15.073 "compare": false, 00:21:15.073 "compare_and_write": false, 00:21:15.073 "abort": true, 00:21:15.073 "seek_hole": false, 00:21:15.073 "seek_data": false, 00:21:15.073 "copy": true, 00:21:15.073 "nvme_iov_md": false 00:21:15.073 }, 00:21:15.073 "memory_domains": [ 00:21:15.073 { 00:21:15.073 "dma_device_id": "system", 00:21:15.073 "dma_device_type": 1 00:21:15.073 }, 00:21:15.073 { 00:21:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.073 "dma_device_type": 2 00:21:15.073 } 00:21:15.073 ], 00:21:15.073 "driver_specific": {} 00:21:15.073 }' 00:21:15.073 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.073 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.331 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:15.331 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.331 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.331 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:15.331 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.332 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.332 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:15.332 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:21:15.332 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:15.589 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.589 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:15.589 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:15.589 18:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:15.848 "name": "BaseBdev3", 00:21:15.848 "aliases": [ 00:21:15.848 "e07eef8d-5bd7-44d8-93b1-d2889e9c7157" 00:21:15.848 ], 00:21:15.848 "product_name": "Malloc disk", 00:21:15.848 "block_size": 512, 00:21:15.848 "num_blocks": 65536, 00:21:15.848 "uuid": "e07eef8d-5bd7-44d8-93b1-d2889e9c7157", 00:21:15.848 "assigned_rate_limits": { 00:21:15.848 "rw_ios_per_sec": 0, 00:21:15.848 "rw_mbytes_per_sec": 0, 00:21:15.848 "r_mbytes_per_sec": 0, 00:21:15.848 "w_mbytes_per_sec": 0 00:21:15.848 }, 00:21:15.848 "claimed": true, 00:21:15.848 "claim_type": "exclusive_write", 00:21:15.848 "zoned": false, 00:21:15.848 "supported_io_types": { 00:21:15.848 "read": true, 00:21:15.848 "write": true, 00:21:15.848 "unmap": true, 00:21:15.848 "flush": true, 00:21:15.848 "reset": true, 00:21:15.848 "nvme_admin": false, 00:21:15.848 "nvme_io": false, 00:21:15.848 "nvme_io_md": false, 00:21:15.848 "write_zeroes": true, 00:21:15.848 "zcopy": true, 00:21:15.848 "get_zone_info": false, 00:21:15.848 "zone_management": false, 00:21:15.848 "zone_append": false, 00:21:15.848 "compare": false, 00:21:15.848 "compare_and_write": false, 00:21:15.848 "abort": true, 00:21:15.848 "seek_hole": false, 00:21:15.848 "seek_data": false, 00:21:15.848 "copy": true, 00:21:15.848 "nvme_iov_md": 
false 00:21:15.848 }, 00:21:15.848 "memory_domains": [ 00:21:15.848 { 00:21:15.848 "dma_device_id": "system", 00:21:15.848 "dma_device_type": 1 00:21:15.848 }, 00:21:15.848 { 00:21:15.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.848 "dma_device_type": 2 00:21:15.848 } 00:21:15.848 ], 00:21:15.848 "driver_specific": {} 00:21:15.848 }' 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.848 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:16.106 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:21:16.365 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.365 "name": "BaseBdev4", 00:21:16.365 "aliases": [ 00:21:16.365 "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02" 00:21:16.365 ], 00:21:16.365 "product_name": "Malloc disk", 00:21:16.365 "block_size": 512, 00:21:16.365 "num_blocks": 65536, 00:21:16.365 "uuid": "c4e1f9d5-3ec6-4edf-b13f-64d5528dde02", 00:21:16.365 "assigned_rate_limits": { 00:21:16.365 "rw_ios_per_sec": 0, 00:21:16.365 "rw_mbytes_per_sec": 0, 00:21:16.365 "r_mbytes_per_sec": 0, 00:21:16.365 "w_mbytes_per_sec": 0 00:21:16.365 }, 00:21:16.365 "claimed": true, 00:21:16.365 "claim_type": "exclusive_write", 00:21:16.365 "zoned": false, 00:21:16.365 "supported_io_types": { 00:21:16.365 "read": true, 00:21:16.365 "write": true, 00:21:16.365 "unmap": true, 00:21:16.365 "flush": true, 00:21:16.365 "reset": true, 00:21:16.365 "nvme_admin": false, 00:21:16.365 "nvme_io": false, 00:21:16.365 "nvme_io_md": false, 00:21:16.365 "write_zeroes": true, 00:21:16.365 "zcopy": true, 00:21:16.365 "get_zone_info": false, 00:21:16.365 "zone_management": false, 00:21:16.365 "zone_append": false, 00:21:16.365 "compare": false, 00:21:16.365 "compare_and_write": false, 00:21:16.365 "abort": true, 00:21:16.365 "seek_hole": false, 00:21:16.365 "seek_data": false, 00:21:16.365 "copy": true, 00:21:16.365 "nvme_iov_md": false 00:21:16.365 }, 00:21:16.365 "memory_domains": [ 00:21:16.365 { 00:21:16.365 "dma_device_id": "system", 00:21:16.365 "dma_device_type": 1 00:21:16.365 }, 00:21:16.365 { 00:21:16.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.365 "dma_device_type": 2 00:21:16.365 } 00:21:16.365 ], 00:21:16.365 "driver_specific": {} 00:21:16.365 }' 00:21:16.365 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.365 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.365 18:36:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.365 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.623 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.623 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.623 18:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.623 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.623 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.623 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.623 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.623 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.623 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:17.189 [2024-07-15 18:36:02.634023] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.189 [2024-07-15 18:36:02.634048] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.190 [2024-07-15 18:36:02.634099] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.190 [2024-07-15 18:36:02.634376] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.190 [2024-07-15 18:36:02.634387] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xebac00 name Existed_Raid, state offline 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2868069 00:21:17.190 18:36:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2868069 ']' 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2868069 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2868069 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2868069' 00:21:17.190 killing process with pid 2868069 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2868069 00:21:17.190 [2024-07-15 18:36:02.710581] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:17.190 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2868069 00:21:17.449 [2024-07-15 18:36:02.746581] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:17.449 00:21:17.449 real 0m36.798s 00:21:17.449 user 1m9.331s 00:21:17.449 sys 0m4.881s 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.449 ************************************ 00:21:17.449 END TEST raid_state_function_test 00:21:17.449 ************************************ 00:21:17.449 18:36:02 bdev_raid -- 
common/autotest_common.sh@1142 -- # return 0 00:21:17.449 18:36:02 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:21:17.449 18:36:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:17.449 18:36:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.449 18:36:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:17.449 ************************************ 00:21:17.449 START TEST raid_state_function_test_sb 00:21:17.449 ************************************ 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i <= num_base_bdevs )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2874598 00:21:17.449 18:36:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2874598' 00:21:17.449 Process raid pid: 2874598 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2874598 /var/tmp/spdk-raid.sock 00:21:17.449 18:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2874598 ']' 00:21:17.708 18:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:17.708 18:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.708 18:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:17.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:17.708 18:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.708 18:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.708 [2024-07-15 18:36:03.056161] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:21:17.708 [2024-07-15 18:36:03.056223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.708 [2024-07-15 18:36:03.158187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.708 [2024-07-15 18:36:03.248780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.965 [2024-07-15 18:36:03.307708] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.965 [2024-07-15 18:36:03.307752] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.532 18:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.532 18:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:21:18.532 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:18.790 [2024-07-15 18:36:04.246617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:18.790 [2024-07-15 18:36:04.246657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:18.790 [2024-07-15 18:36:04.246666] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:18.790 [2024-07-15 18:36:04.246675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:18.790 [2024-07-15 18:36:04.246682] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:18.790 [2024-07-15 18:36:04.246689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:21:18.790 [2024-07-15 18:36:04.246696] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:18.790 [2024-07-15 18:36:04.246704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.790 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.048 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.048 "name": "Existed_Raid", 00:21:19.048 "uuid": "d390e744-5c36-4f21-a2f5-64a8de0de2c4", 
00:21:19.048 "strip_size_kb": 0, 00:21:19.048 "state": "configuring", 00:21:19.048 "raid_level": "raid1", 00:21:19.048 "superblock": true, 00:21:19.048 "num_base_bdevs": 4, 00:21:19.048 "num_base_bdevs_discovered": 0, 00:21:19.048 "num_base_bdevs_operational": 4, 00:21:19.048 "base_bdevs_list": [ 00:21:19.048 { 00:21:19.048 "name": "BaseBdev1", 00:21:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.048 "is_configured": false, 00:21:19.048 "data_offset": 0, 00:21:19.048 "data_size": 0 00:21:19.048 }, 00:21:19.048 { 00:21:19.048 "name": "BaseBdev2", 00:21:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.048 "is_configured": false, 00:21:19.048 "data_offset": 0, 00:21:19.048 "data_size": 0 00:21:19.048 }, 00:21:19.048 { 00:21:19.048 "name": "BaseBdev3", 00:21:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.048 "is_configured": false, 00:21:19.048 "data_offset": 0, 00:21:19.048 "data_size": 0 00:21:19.048 }, 00:21:19.048 { 00:21:19.048 "name": "BaseBdev4", 00:21:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.048 "is_configured": false, 00:21:19.048 "data_offset": 0, 00:21:19.048 "data_size": 0 00:21:19.048 } 00:21:19.048 ] 00:21:19.048 }' 00:21:19.048 18:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.048 18:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.613 18:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:19.870 [2024-07-15 18:36:05.377489] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:19.870 [2024-07-15 18:36:05.377518] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1677bc0 name Existed_Raid, state configuring 00:21:19.870 18:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:20.128 [2024-07-15 18:36:05.634196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:20.128 [2024-07-15 18:36:05.634222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:20.128 [2024-07-15 18:36:05.634230] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:20.128 [2024-07-15 18:36:05.634238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:20.128 [2024-07-15 18:36:05.634244] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:20.128 [2024-07-15 18:36:05.634252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:20.128 [2024-07-15 18:36:05.634259] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:20.128 [2024-07-15 18:36:05.634267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:20.128 18:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:20.386 [2024-07-15 18:36:05.900279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.386 BaseBdev1 00:21:20.386 18:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:20.386 18:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:20.386 18:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:20.386 18:36:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:20.386 18:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:20.386 18:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:20.386 18:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.643 18:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:20.900 [ 00:21:20.900 { 00:21:20.900 "name": "BaseBdev1", 00:21:20.900 "aliases": [ 00:21:20.900 "4cd79a0a-047d-4dcd-bef8-53b214bf510a" 00:21:20.900 ], 00:21:20.900 "product_name": "Malloc disk", 00:21:20.900 "block_size": 512, 00:21:20.900 "num_blocks": 65536, 00:21:20.900 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:20.900 "assigned_rate_limits": { 00:21:20.900 "rw_ios_per_sec": 0, 00:21:20.900 "rw_mbytes_per_sec": 0, 00:21:20.900 "r_mbytes_per_sec": 0, 00:21:20.900 "w_mbytes_per_sec": 0 00:21:20.900 }, 00:21:20.900 "claimed": true, 00:21:20.900 "claim_type": "exclusive_write", 00:21:20.900 "zoned": false, 00:21:20.900 "supported_io_types": { 00:21:20.900 "read": true, 00:21:20.900 "write": true, 00:21:20.900 "unmap": true, 00:21:20.900 "flush": true, 00:21:20.900 "reset": true, 00:21:20.900 "nvme_admin": false, 00:21:20.900 "nvme_io": false, 00:21:20.900 "nvme_io_md": false, 00:21:20.900 "write_zeroes": true, 00:21:20.900 "zcopy": true, 00:21:20.900 "get_zone_info": false, 00:21:20.900 "zone_management": false, 00:21:20.900 "zone_append": false, 00:21:20.900 "compare": false, 00:21:20.900 "compare_and_write": false, 00:21:20.900 "abort": true, 00:21:20.900 "seek_hole": false, 00:21:20.900 "seek_data": false, 
00:21:20.900 "copy": true, 00:21:20.900 "nvme_iov_md": false 00:21:20.900 }, 00:21:20.900 "memory_domains": [ 00:21:20.900 { 00:21:20.900 "dma_device_id": "system", 00:21:20.900 "dma_device_type": 1 00:21:20.900 }, 00:21:20.900 { 00:21:20.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.900 "dma_device_type": 2 00:21:20.900 } 00:21:20.900 ], 00:21:20.900 "driver_specific": {} 00:21:20.900 } 00:21:20.900 ] 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.900 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.173 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.173 "name": "Existed_Raid", 00:21:21.173 "uuid": "cd50ca1c-cab3-41aa-9681-8650b7d9d5c7", 00:21:21.173 "strip_size_kb": 0, 00:21:21.173 "state": "configuring", 00:21:21.173 "raid_level": "raid1", 00:21:21.173 "superblock": true, 00:21:21.173 "num_base_bdevs": 4, 00:21:21.173 "num_base_bdevs_discovered": 1, 00:21:21.173 "num_base_bdevs_operational": 4, 00:21:21.173 "base_bdevs_list": [ 00:21:21.173 { 00:21:21.173 "name": "BaseBdev1", 00:21:21.173 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:21.173 "is_configured": true, 00:21:21.173 "data_offset": 2048, 00:21:21.173 "data_size": 63488 00:21:21.174 }, 00:21:21.174 { 00:21:21.174 "name": "BaseBdev2", 00:21:21.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.174 "is_configured": false, 00:21:21.174 "data_offset": 0, 00:21:21.174 "data_size": 0 00:21:21.174 }, 00:21:21.174 { 00:21:21.174 "name": "BaseBdev3", 00:21:21.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.174 "is_configured": false, 00:21:21.174 "data_offset": 0, 00:21:21.174 "data_size": 0 00:21:21.174 }, 00:21:21.174 { 00:21:21.174 "name": "BaseBdev4", 00:21:21.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.174 "is_configured": false, 00:21:21.174 "data_offset": 0, 00:21:21.174 "data_size": 0 00:21:21.174 } 00:21:21.174 ] 00:21:21.174 }' 00:21:21.174 18:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.174 18:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.110 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:22.110 [2024-07-15 18:36:07.560746] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:21:22.110 [2024-07-15 18:36:07.560787] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1677430 name Existed_Raid, state configuring 00:21:22.110 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:22.367 [2024-07-15 18:36:07.817475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.367 [2024-07-15 18:36:07.818997] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:22.367 [2024-07-15 18:36:07.819028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:22.367 [2024-07-15 18:36:07.819036] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:22.367 [2024-07-15 18:36:07.819045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:22.367 [2024-07-15 18:36:07.819052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:22.367 [2024-07-15 18:36:07.819059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:22.367 18:36:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.367 18:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.624 18:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.624 "name": "Existed_Raid", 00:21:22.624 "uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:22.624 "strip_size_kb": 0, 00:21:22.624 "state": "configuring", 00:21:22.624 "raid_level": "raid1", 00:21:22.624 "superblock": true, 00:21:22.624 "num_base_bdevs": 4, 00:21:22.624 "num_base_bdevs_discovered": 1, 00:21:22.624 "num_base_bdevs_operational": 4, 00:21:22.624 "base_bdevs_list": [ 00:21:22.624 { 00:21:22.624 "name": "BaseBdev1", 00:21:22.624 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:22.624 "is_configured": true, 00:21:22.624 "data_offset": 2048, 00:21:22.624 "data_size": 63488 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "name": "BaseBdev2", 00:21:22.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.624 "is_configured": false, 
00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 0 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "name": "BaseBdev3", 00:21:22.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.624 "is_configured": false, 00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 0 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "name": "BaseBdev4", 00:21:22.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.624 "is_configured": false, 00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 0 00:21:22.624 } 00:21:22.624 ] 00:21:22.624 }' 00:21:22.624 18:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.624 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.190 18:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:23.450 [2024-07-15 18:36:08.955870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.450 BaseBdev2 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:23.450 18:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:21:23.709 18:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:23.967 [ 00:21:23.967 { 00:21:23.967 "name": "BaseBdev2", 00:21:23.967 "aliases": [ 00:21:23.967 "454b8270-4c41-4185-8600-d6d9961ec95e" 00:21:23.967 ], 00:21:23.967 "product_name": "Malloc disk", 00:21:23.967 "block_size": 512, 00:21:23.967 "num_blocks": 65536, 00:21:23.967 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:23.967 "assigned_rate_limits": { 00:21:23.967 "rw_ios_per_sec": 0, 00:21:23.967 "rw_mbytes_per_sec": 0, 00:21:23.967 "r_mbytes_per_sec": 0, 00:21:23.967 "w_mbytes_per_sec": 0 00:21:23.967 }, 00:21:23.967 "claimed": true, 00:21:23.967 "claim_type": "exclusive_write", 00:21:23.967 "zoned": false, 00:21:23.967 "supported_io_types": { 00:21:23.967 "read": true, 00:21:23.967 "write": true, 00:21:23.967 "unmap": true, 00:21:23.967 "flush": true, 00:21:23.967 "reset": true, 00:21:23.967 "nvme_admin": false, 00:21:23.967 "nvme_io": false, 00:21:23.967 "nvme_io_md": false, 00:21:23.967 "write_zeroes": true, 00:21:23.967 "zcopy": true, 00:21:23.967 "get_zone_info": false, 00:21:23.967 "zone_management": false, 00:21:23.967 "zone_append": false, 00:21:23.967 "compare": false, 00:21:23.967 "compare_and_write": false, 00:21:23.967 "abort": true, 00:21:23.967 "seek_hole": false, 00:21:23.967 "seek_data": false, 00:21:23.967 "copy": true, 00:21:23.967 "nvme_iov_md": false 00:21:23.967 }, 00:21:23.967 "memory_domains": [ 00:21:23.967 { 00:21:23.967 "dma_device_id": "system", 00:21:23.967 "dma_device_type": 1 00:21:23.967 }, 00:21:23.967 { 00:21:23.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.967 "dma_device_type": 2 00:21:23.967 } 00:21:23.967 ], 00:21:23.967 "driver_specific": {} 00:21:23.967 } 00:21:23.967 ] 00:21:23.967 18:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 
-- # return 0 00:21:23.967 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:23.967 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:23.967 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:23.967 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.967 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.968 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.226 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.226 "name": "Existed_Raid", 00:21:24.226 "uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:24.226 "strip_size_kb": 0, 
00:21:24.226 "state": "configuring", 00:21:24.226 "raid_level": "raid1", 00:21:24.226 "superblock": true, 00:21:24.226 "num_base_bdevs": 4, 00:21:24.226 "num_base_bdevs_discovered": 2, 00:21:24.226 "num_base_bdevs_operational": 4, 00:21:24.226 "base_bdevs_list": [ 00:21:24.226 { 00:21:24.226 "name": "BaseBdev1", 00:21:24.226 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:24.226 "is_configured": true, 00:21:24.226 "data_offset": 2048, 00:21:24.226 "data_size": 63488 00:21:24.226 }, 00:21:24.226 { 00:21:24.226 "name": "BaseBdev2", 00:21:24.226 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:24.226 "is_configured": true, 00:21:24.226 "data_offset": 2048, 00:21:24.226 "data_size": 63488 00:21:24.226 }, 00:21:24.226 { 00:21:24.226 "name": "BaseBdev3", 00:21:24.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.226 "is_configured": false, 00:21:24.226 "data_offset": 0, 00:21:24.226 "data_size": 0 00:21:24.226 }, 00:21:24.226 { 00:21:24.226 "name": "BaseBdev4", 00:21:24.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.226 "is_configured": false, 00:21:24.226 "data_offset": 0, 00:21:24.226 "data_size": 0 00:21:24.226 } 00:21:24.226 ] 00:21:24.226 }' 00:21:24.226 18:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.226 18:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:25.162 [2024-07-15 18:36:10.647603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.162 BaseBdev3 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev3 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:25.162 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.420 18:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:25.679 [ 00:21:25.679 { 00:21:25.679 "name": "BaseBdev3", 00:21:25.679 "aliases": [ 00:21:25.679 "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80" 00:21:25.679 ], 00:21:25.679 "product_name": "Malloc disk", 00:21:25.679 "block_size": 512, 00:21:25.679 "num_blocks": 65536, 00:21:25.679 "uuid": "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80", 00:21:25.679 "assigned_rate_limits": { 00:21:25.679 "rw_ios_per_sec": 0, 00:21:25.679 "rw_mbytes_per_sec": 0, 00:21:25.679 "r_mbytes_per_sec": 0, 00:21:25.679 "w_mbytes_per_sec": 0 00:21:25.679 }, 00:21:25.679 "claimed": true, 00:21:25.679 "claim_type": "exclusive_write", 00:21:25.679 "zoned": false, 00:21:25.679 "supported_io_types": { 00:21:25.679 "read": true, 00:21:25.679 "write": true, 00:21:25.679 "unmap": true, 00:21:25.679 "flush": true, 00:21:25.679 "reset": true, 00:21:25.679 "nvme_admin": false, 00:21:25.679 "nvme_io": false, 00:21:25.679 "nvme_io_md": false, 00:21:25.679 "write_zeroes": true, 00:21:25.679 "zcopy": true, 00:21:25.679 "get_zone_info": false, 00:21:25.679 "zone_management": false, 00:21:25.679 "zone_append": false, 00:21:25.679 
"compare": false, 00:21:25.679 "compare_and_write": false, 00:21:25.679 "abort": true, 00:21:25.679 "seek_hole": false, 00:21:25.679 "seek_data": false, 00:21:25.679 "copy": true, 00:21:25.679 "nvme_iov_md": false 00:21:25.679 }, 00:21:25.679 "memory_domains": [ 00:21:25.679 { 00:21:25.679 "dma_device_id": "system", 00:21:25.679 "dma_device_type": 1 00:21:25.679 }, 00:21:25.679 { 00:21:25.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.679 "dma_device_type": 2 00:21:25.679 } 00:21:25.679 ], 00:21:25.679 "driver_specific": {} 00:21:25.679 } 00:21:25.679 ] 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.679 18:36:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.679 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.938 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.938 "name": "Existed_Raid", 00:21:25.938 "uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:25.938 "strip_size_kb": 0, 00:21:25.938 "state": "configuring", 00:21:25.938 "raid_level": "raid1", 00:21:25.938 "superblock": true, 00:21:25.938 "num_base_bdevs": 4, 00:21:25.938 "num_base_bdevs_discovered": 3, 00:21:25.938 "num_base_bdevs_operational": 4, 00:21:25.938 "base_bdevs_list": [ 00:21:25.938 { 00:21:25.938 "name": "BaseBdev1", 00:21:25.938 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:25.938 "is_configured": true, 00:21:25.938 "data_offset": 2048, 00:21:25.938 "data_size": 63488 00:21:25.938 }, 00:21:25.938 { 00:21:25.938 "name": "BaseBdev2", 00:21:25.938 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:25.938 "is_configured": true, 00:21:25.938 "data_offset": 2048, 00:21:25.938 "data_size": 63488 00:21:25.938 }, 00:21:25.938 { 00:21:25.938 "name": "BaseBdev3", 00:21:25.938 "uuid": "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80", 00:21:25.938 "is_configured": true, 00:21:25.938 "data_offset": 2048, 00:21:25.938 "data_size": 63488 00:21:25.938 }, 00:21:25.938 { 00:21:25.938 "name": "BaseBdev4", 00:21:25.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.938 "is_configured": false, 00:21:25.938 "data_offset": 0, 00:21:25.938 "data_size": 0 00:21:25.938 } 00:21:25.938 ] 00:21:25.938 }' 00:21:25.938 18:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.938 18:36:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.504 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:26.761 [2024-07-15 18:36:12.287263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:26.761 [2024-07-15 18:36:12.287434] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1678490 00:21:26.761 [2024-07-15 18:36:12.287447] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:26.761 [2024-07-15 18:36:12.287627] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16642d0 00:21:26.761 [2024-07-15 18:36:12.287759] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1678490 00:21:26.761 [2024-07-15 18:36:12.287771] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1678490 00:21:26.761 [2024-07-15 18:36:12.287871] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.761 BaseBdev4 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:26.762 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:27.044 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:27.329 [ 00:21:27.329 { 00:21:27.329 "name": "BaseBdev4", 00:21:27.329 "aliases": [ 00:21:27.329 "409cf049-c916-413d-a9ab-11c8b134fedd" 00:21:27.329 ], 00:21:27.329 "product_name": "Malloc disk", 00:21:27.329 "block_size": 512, 00:21:27.329 "num_blocks": 65536, 00:21:27.329 "uuid": "409cf049-c916-413d-a9ab-11c8b134fedd", 00:21:27.329 "assigned_rate_limits": { 00:21:27.329 "rw_ios_per_sec": 0, 00:21:27.329 "rw_mbytes_per_sec": 0, 00:21:27.329 "r_mbytes_per_sec": 0, 00:21:27.329 "w_mbytes_per_sec": 0 00:21:27.329 }, 00:21:27.329 "claimed": true, 00:21:27.329 "claim_type": "exclusive_write", 00:21:27.329 "zoned": false, 00:21:27.329 "supported_io_types": { 00:21:27.329 "read": true, 00:21:27.329 "write": true, 00:21:27.329 "unmap": true, 00:21:27.329 "flush": true, 00:21:27.329 "reset": true, 00:21:27.329 "nvme_admin": false, 00:21:27.329 "nvme_io": false, 00:21:27.329 "nvme_io_md": false, 00:21:27.329 "write_zeroes": true, 00:21:27.329 "zcopy": true, 00:21:27.329 "get_zone_info": false, 00:21:27.329 "zone_management": false, 00:21:27.329 "zone_append": false, 00:21:27.329 "compare": false, 00:21:27.329 "compare_and_write": false, 00:21:27.329 "abort": true, 00:21:27.329 "seek_hole": false, 00:21:27.329 "seek_data": false, 00:21:27.329 "copy": true, 00:21:27.329 "nvme_iov_md": false 00:21:27.329 }, 00:21:27.329 "memory_domains": [ 00:21:27.329 { 00:21:27.329 "dma_device_id": "system", 00:21:27.329 "dma_device_type": 1 00:21:27.329 }, 00:21:27.329 { 00:21:27.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.329 "dma_device_type": 2 00:21:27.329 } 00:21:27.329 ], 00:21:27.329 "driver_specific": {} 00:21:27.330 } 00:21:27.330 ] 
00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.330 18:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.587 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.588 "name": "Existed_Raid", 00:21:27.588 
"uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:27.588 "strip_size_kb": 0, 00:21:27.588 "state": "online", 00:21:27.588 "raid_level": "raid1", 00:21:27.588 "superblock": true, 00:21:27.588 "num_base_bdevs": 4, 00:21:27.588 "num_base_bdevs_discovered": 4, 00:21:27.588 "num_base_bdevs_operational": 4, 00:21:27.588 "base_bdevs_list": [ 00:21:27.588 { 00:21:27.588 "name": "BaseBdev1", 00:21:27.588 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:27.588 "is_configured": true, 00:21:27.588 "data_offset": 2048, 00:21:27.588 "data_size": 63488 00:21:27.588 }, 00:21:27.588 { 00:21:27.588 "name": "BaseBdev2", 00:21:27.588 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:27.588 "is_configured": true, 00:21:27.588 "data_offset": 2048, 00:21:27.588 "data_size": 63488 00:21:27.588 }, 00:21:27.588 { 00:21:27.588 "name": "BaseBdev3", 00:21:27.588 "uuid": "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80", 00:21:27.588 "is_configured": true, 00:21:27.588 "data_offset": 2048, 00:21:27.588 "data_size": 63488 00:21:27.588 }, 00:21:27.588 { 00:21:27.588 "name": "BaseBdev4", 00:21:27.588 "uuid": "409cf049-c916-413d-a9ab-11c8b134fedd", 00:21:27.588 "is_configured": true, 00:21:27.588 "data_offset": 2048, 00:21:27.588 "data_size": 63488 00:21:27.588 } 00:21:27.588 ] 00:21:27.588 }' 00:21:27.588 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.588 18:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:28.519 18:36:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:28.519 18:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:28.519 [2024-07-15 18:36:13.976186] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.519 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:28.519 "name": "Existed_Raid", 00:21:28.519 "aliases": [ 00:21:28.519 "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10" 00:21:28.519 ], 00:21:28.519 "product_name": "Raid Volume", 00:21:28.519 "block_size": 512, 00:21:28.519 "num_blocks": 63488, 00:21:28.519 "uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:28.519 "assigned_rate_limits": { 00:21:28.519 "rw_ios_per_sec": 0, 00:21:28.519 "rw_mbytes_per_sec": 0, 00:21:28.519 "r_mbytes_per_sec": 0, 00:21:28.519 "w_mbytes_per_sec": 0 00:21:28.519 }, 00:21:28.519 "claimed": false, 00:21:28.519 "zoned": false, 00:21:28.519 "supported_io_types": { 00:21:28.519 "read": true, 00:21:28.519 "write": true, 00:21:28.519 "unmap": false, 00:21:28.519 "flush": false, 00:21:28.519 "reset": true, 00:21:28.519 "nvme_admin": false, 00:21:28.519 "nvme_io": false, 00:21:28.519 "nvme_io_md": false, 00:21:28.519 "write_zeroes": true, 00:21:28.519 "zcopy": false, 00:21:28.519 "get_zone_info": false, 00:21:28.519 "zone_management": false, 00:21:28.519 "zone_append": false, 00:21:28.519 "compare": false, 00:21:28.519 "compare_and_write": false, 00:21:28.519 "abort": false, 00:21:28.519 "seek_hole": false, 00:21:28.519 "seek_data": false, 00:21:28.519 "copy": false, 00:21:28.519 "nvme_iov_md": false 00:21:28.519 }, 00:21:28.519 
"memory_domains": [ 00:21:28.519 { 00:21:28.519 "dma_device_id": "system", 00:21:28.519 "dma_device_type": 1 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.519 "dma_device_type": 2 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "system", 00:21:28.519 "dma_device_type": 1 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.519 "dma_device_type": 2 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "system", 00:21:28.519 "dma_device_type": 1 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.519 "dma_device_type": 2 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "system", 00:21:28.519 "dma_device_type": 1 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.519 "dma_device_type": 2 00:21:28.519 } 00:21:28.519 ], 00:21:28.519 "driver_specific": { 00:21:28.519 "raid": { 00:21:28.519 "uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:28.519 "strip_size_kb": 0, 00:21:28.519 "state": "online", 00:21:28.519 "raid_level": "raid1", 00:21:28.519 "superblock": true, 00:21:28.519 "num_base_bdevs": 4, 00:21:28.519 "num_base_bdevs_discovered": 4, 00:21:28.519 "num_base_bdevs_operational": 4, 00:21:28.519 "base_bdevs_list": [ 00:21:28.519 { 00:21:28.519 "name": "BaseBdev1", 00:21:28.519 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "name": "BaseBdev2", 00:21:28.519 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "name": "BaseBdev3", 00:21:28.519 "uuid": "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 
"data_size": 63488 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "name": "BaseBdev4", 00:21:28.519 "uuid": "409cf049-c916-413d-a9ab-11c8b134fedd", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 } 00:21:28.519 ] 00:21:28.519 } 00:21:28.519 } 00:21:28.519 }' 00:21:28.519 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:28.519 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:28.519 BaseBdev2 00:21:28.519 BaseBdev3 00:21:28.519 BaseBdev4' 00:21:28.519 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:28.519 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:28.519 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:29.086 "name": "BaseBdev1", 00:21:29.086 "aliases": [ 00:21:29.086 "4cd79a0a-047d-4dcd-bef8-53b214bf510a" 00:21:29.086 ], 00:21:29.086 "product_name": "Malloc disk", 00:21:29.086 "block_size": 512, 00:21:29.086 "num_blocks": 65536, 00:21:29.086 "uuid": "4cd79a0a-047d-4dcd-bef8-53b214bf510a", 00:21:29.086 "assigned_rate_limits": { 00:21:29.086 "rw_ios_per_sec": 0, 00:21:29.086 "rw_mbytes_per_sec": 0, 00:21:29.086 "r_mbytes_per_sec": 0, 00:21:29.086 "w_mbytes_per_sec": 0 00:21:29.086 }, 00:21:29.086 "claimed": true, 00:21:29.086 "claim_type": "exclusive_write", 00:21:29.086 "zoned": false, 00:21:29.086 "supported_io_types": { 00:21:29.086 "read": true, 00:21:29.086 "write": true, 00:21:29.086 "unmap": true, 00:21:29.086 "flush": true, 00:21:29.086 "reset": true, 
00:21:29.086 "nvme_admin": false, 00:21:29.086 "nvme_io": false, 00:21:29.086 "nvme_io_md": false, 00:21:29.086 "write_zeroes": true, 00:21:29.086 "zcopy": true, 00:21:29.086 "get_zone_info": false, 00:21:29.086 "zone_management": false, 00:21:29.086 "zone_append": false, 00:21:29.086 "compare": false, 00:21:29.086 "compare_and_write": false, 00:21:29.086 "abort": true, 00:21:29.086 "seek_hole": false, 00:21:29.086 "seek_data": false, 00:21:29.086 "copy": true, 00:21:29.086 "nvme_iov_md": false 00:21:29.086 }, 00:21:29.086 "memory_domains": [ 00:21:29.086 { 00:21:29.086 "dma_device_id": "system", 00:21:29.086 "dma_device_type": 1 00:21:29.086 }, 00:21:29.086 { 00:21:29.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.086 "dma_device_type": 2 00:21:29.086 } 00:21:29.086 ], 00:21:29.086 "driver_specific": {} 00:21:29.086 }' 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.086 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:29.344 18:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:29.603 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:29.603 "name": "BaseBdev2", 00:21:29.603 "aliases": [ 00:21:29.603 "454b8270-4c41-4185-8600-d6d9961ec95e" 00:21:29.603 ], 00:21:29.603 "product_name": "Malloc disk", 00:21:29.603 "block_size": 512, 00:21:29.603 "num_blocks": 65536, 00:21:29.603 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:29.603 "assigned_rate_limits": { 00:21:29.603 "rw_ios_per_sec": 0, 00:21:29.603 "rw_mbytes_per_sec": 0, 00:21:29.603 "r_mbytes_per_sec": 0, 00:21:29.603 "w_mbytes_per_sec": 0 00:21:29.603 }, 00:21:29.603 "claimed": true, 00:21:29.603 "claim_type": "exclusive_write", 00:21:29.603 "zoned": false, 00:21:29.603 "supported_io_types": { 00:21:29.603 "read": true, 00:21:29.603 "write": true, 00:21:29.603 "unmap": true, 00:21:29.603 "flush": true, 00:21:29.603 "reset": true, 00:21:29.603 "nvme_admin": false, 00:21:29.603 "nvme_io": false, 00:21:29.603 "nvme_io_md": false, 00:21:29.603 "write_zeroes": true, 00:21:29.603 "zcopy": true, 00:21:29.603 "get_zone_info": false, 00:21:29.603 "zone_management": false, 00:21:29.603 "zone_append": false, 00:21:29.603 "compare": false, 00:21:29.603 "compare_and_write": false, 00:21:29.603 "abort": true, 00:21:29.603 "seek_hole": false, 00:21:29.603 "seek_data": false, 00:21:29.603 "copy": true, 00:21:29.603 "nvme_iov_md": false 00:21:29.603 }, 00:21:29.603 "memory_domains": [ 00:21:29.603 { 
00:21:29.603 "dma_device_id": "system", 00:21:29.603 "dma_device_type": 1 00:21:29.603 }, 00:21:29.603 { 00:21:29.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.603 "dma_device_type": 2 00:21:29.603 } 00:21:29.603 ], 00:21:29.603 "driver_specific": {} 00:21:29.603 }' 00:21:29.603 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.603 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.603 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:29.603 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.862 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.862 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:29.862 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.862 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.862 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:29.862 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.121 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.121 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:30.121 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:30.121 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:30.121 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:30.380 18:36:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:30.380 "name": "BaseBdev3", 00:21:30.380 "aliases": [ 00:21:30.380 "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80" 00:21:30.380 ], 00:21:30.380 "product_name": "Malloc disk", 00:21:30.380 "block_size": 512, 00:21:30.380 "num_blocks": 65536, 00:21:30.380 "uuid": "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80", 00:21:30.380 "assigned_rate_limits": { 00:21:30.380 "rw_ios_per_sec": 0, 00:21:30.380 "rw_mbytes_per_sec": 0, 00:21:30.380 "r_mbytes_per_sec": 0, 00:21:30.380 "w_mbytes_per_sec": 0 00:21:30.380 }, 00:21:30.380 "claimed": true, 00:21:30.380 "claim_type": "exclusive_write", 00:21:30.380 "zoned": false, 00:21:30.380 "supported_io_types": { 00:21:30.380 "read": true, 00:21:30.380 "write": true, 00:21:30.380 "unmap": true, 00:21:30.380 "flush": true, 00:21:30.380 "reset": true, 00:21:30.380 "nvme_admin": false, 00:21:30.380 "nvme_io": false, 00:21:30.380 "nvme_io_md": false, 00:21:30.380 "write_zeroes": true, 00:21:30.380 "zcopy": true, 00:21:30.380 "get_zone_info": false, 00:21:30.380 "zone_management": false, 00:21:30.380 "zone_append": false, 00:21:30.380 "compare": false, 00:21:30.380 "compare_and_write": false, 00:21:30.380 "abort": true, 00:21:30.380 "seek_hole": false, 00:21:30.380 "seek_data": false, 00:21:30.380 "copy": true, 00:21:30.380 "nvme_iov_md": false 00:21:30.380 }, 00:21:30.380 "memory_domains": [ 00:21:30.380 { 00:21:30.380 "dma_device_id": "system", 00:21:30.380 "dma_device_type": 1 00:21:30.380 }, 00:21:30.380 { 00:21:30.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.380 "dma_device_type": 2 00:21:30.380 } 00:21:30.380 ], 00:21:30.380 "driver_specific": {} 00:21:30.380 }' 00:21:30.380 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.380 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.638 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:21:30.638 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.638 18:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.638 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:30.638 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.638 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.638 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:30.638 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.896 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.896 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:30.896 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:30.896 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:30.896 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:31.155 "name": "BaseBdev4", 00:21:31.155 "aliases": [ 00:21:31.155 "409cf049-c916-413d-a9ab-11c8b134fedd" 00:21:31.155 ], 00:21:31.155 "product_name": "Malloc disk", 00:21:31.155 "block_size": 512, 00:21:31.155 "num_blocks": 65536, 00:21:31.155 "uuid": "409cf049-c916-413d-a9ab-11c8b134fedd", 00:21:31.155 "assigned_rate_limits": { 00:21:31.155 "rw_ios_per_sec": 0, 00:21:31.155 "rw_mbytes_per_sec": 0, 00:21:31.155 "r_mbytes_per_sec": 0, 00:21:31.155 "w_mbytes_per_sec": 0 
00:21:31.155 }, 00:21:31.155 "claimed": true, 00:21:31.155 "claim_type": "exclusive_write", 00:21:31.155 "zoned": false, 00:21:31.155 "supported_io_types": { 00:21:31.155 "read": true, 00:21:31.155 "write": true, 00:21:31.155 "unmap": true, 00:21:31.155 "flush": true, 00:21:31.155 "reset": true, 00:21:31.155 "nvme_admin": false, 00:21:31.155 "nvme_io": false, 00:21:31.155 "nvme_io_md": false, 00:21:31.155 "write_zeroes": true, 00:21:31.155 "zcopy": true, 00:21:31.155 "get_zone_info": false, 00:21:31.155 "zone_management": false, 00:21:31.155 "zone_append": false, 00:21:31.155 "compare": false, 00:21:31.155 "compare_and_write": false, 00:21:31.155 "abort": true, 00:21:31.155 "seek_hole": false, 00:21:31.155 "seek_data": false, 00:21:31.155 "copy": true, 00:21:31.155 "nvme_iov_md": false 00:21:31.155 }, 00:21:31.155 "memory_domains": [ 00:21:31.155 { 00:21:31.155 "dma_device_id": "system", 00:21:31.155 "dma_device_type": 1 00:21:31.155 }, 00:21:31.155 { 00:21:31.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.155 "dma_device_type": 2 00:21:31.155 } 00:21:31.155 ], 00:21:31.155 "driver_specific": {} 00:21:31.155 }' 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:31.155 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.419 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.419 
18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.419 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.419 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.419 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:31.419 18:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:31.677 [2024-07-15 18:36:17.192547] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.677 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.678 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.678 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.678 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.244 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.244 "name": "Existed_Raid", 00:21:32.244 "uuid": "cf3fc7d5-c6ba-4c48-a3f8-069378a1be10", 00:21:32.244 "strip_size_kb": 0, 00:21:32.244 "state": "online", 00:21:32.244 "raid_level": "raid1", 00:21:32.244 "superblock": true, 00:21:32.244 "num_base_bdevs": 4, 00:21:32.244 "num_base_bdevs_discovered": 3, 00:21:32.244 "num_base_bdevs_operational": 3, 00:21:32.244 "base_bdevs_list": [ 00:21:32.244 { 00:21:32.244 "name": null, 00:21:32.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.244 "is_configured": false, 00:21:32.244 "data_offset": 2048, 00:21:32.244 "data_size": 63488 00:21:32.244 }, 00:21:32.244 { 00:21:32.244 "name": "BaseBdev2", 00:21:32.244 "uuid": "454b8270-4c41-4185-8600-d6d9961ec95e", 00:21:32.244 "is_configured": true, 00:21:32.244 "data_offset": 2048, 00:21:32.244 "data_size": 63488 00:21:32.244 }, 00:21:32.244 { 00:21:32.244 "name": "BaseBdev3", 00:21:32.244 "uuid": "c3c12465-0ac9-4c1a-90ec-f1cf8698cc80", 00:21:32.244 "is_configured": true, 00:21:32.244 "data_offset": 2048, 00:21:32.244 "data_size": 63488 00:21:32.244 }, 00:21:32.244 { 00:21:32.244 "name": 
"BaseBdev4", 00:21:32.244 "uuid": "409cf049-c916-413d-a9ab-11c8b134fedd", 00:21:32.244 "is_configured": true, 00:21:32.244 "data_offset": 2048, 00:21:32.244 "data_size": 63488 00:21:32.244 } 00:21:32.244 ] 00:21:32.244 }' 00:21:32.244 18:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.244 18:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.809 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:32.809 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:32.809 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.809 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:33.068 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:33.068 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:33.068 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:33.326 [2024-07-15 18:36:18.713776] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:33.326 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:33.326 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:33.326 18:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.326 18:36:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:33.584 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:33.584 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:33.584 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:33.842 [2024-07-15 18:36:19.302043] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:33.842 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:33.842 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:33.842 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.842 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:34.101 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:34.101 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:34.101 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:34.360 [2024-07-15 18:36:19.890142] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:34.360 [2024-07-15 18:36:19.890220] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.360 [2024-07-15 18:36:19.900831] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.360 [2024-07-15 18:36:19.900865] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.360 [2024-07-15 18:36:19.900874] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1678490 name Existed_Raid, state offline 00:21:34.617 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:34.617 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:34.617 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.617 18:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:34.875 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:34.875 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:34.875 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:34.875 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:34.875 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:34.875 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:35.443 BaseBdev2 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.443 18:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.702 18:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:35.702 [ 00:21:35.702 { 00:21:35.702 "name": "BaseBdev2", 00:21:35.702 "aliases": [ 00:21:35.702 "b71dd510-318d-44e5-9afd-d0c4a6ceab1d" 00:21:35.702 ], 00:21:35.702 "product_name": "Malloc disk", 00:21:35.702 "block_size": 512, 00:21:35.702 "num_blocks": 65536, 00:21:35.702 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:35.702 "assigned_rate_limits": { 00:21:35.702 "rw_ios_per_sec": 0, 00:21:35.702 "rw_mbytes_per_sec": 0, 00:21:35.702 "r_mbytes_per_sec": 0, 00:21:35.702 "w_mbytes_per_sec": 0 00:21:35.702 }, 00:21:35.702 "claimed": false, 00:21:35.702 "zoned": false, 00:21:35.702 "supported_io_types": { 00:21:35.702 "read": true, 00:21:35.702 "write": true, 00:21:35.702 "unmap": true, 00:21:35.702 "flush": true, 00:21:35.702 "reset": true, 00:21:35.702 "nvme_admin": false, 00:21:35.702 "nvme_io": false, 00:21:35.702 "nvme_io_md": false, 00:21:35.702 "write_zeroes": true, 00:21:35.702 "zcopy": true, 00:21:35.702 "get_zone_info": false, 00:21:35.702 "zone_management": false, 00:21:35.702 "zone_append": false, 00:21:35.702 "compare": false, 00:21:35.702 "compare_and_write": false, 00:21:35.702 "abort": true, 00:21:35.702 "seek_hole": false, 00:21:35.702 "seek_data": false, 00:21:35.702 "copy": true, 00:21:35.702 "nvme_iov_md": false 00:21:35.702 }, 00:21:35.702 
"memory_domains": [ 00:21:35.702 { 00:21:35.702 "dma_device_id": "system", 00:21:35.702 "dma_device_type": 1 00:21:35.702 }, 00:21:35.702 { 00:21:35.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.702 "dma_device_type": 2 00:21:35.702 } 00:21:35.702 ], 00:21:35.702 "driver_specific": {} 00:21:35.702 } 00:21:35.702 ] 00:21:35.702 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:35.702 18:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:35.702 18:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:35.702 18:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:35.961 BaseBdev3 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.961 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.220 18:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:21:36.478 [ 00:21:36.478 { 00:21:36.478 "name": "BaseBdev3", 00:21:36.478 "aliases": [ 00:21:36.478 "1ba2946a-88c0-46bb-b52f-709ffb011157" 00:21:36.478 ], 00:21:36.478 "product_name": "Malloc disk", 00:21:36.478 "block_size": 512, 00:21:36.478 "num_blocks": 65536, 00:21:36.478 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:36.478 "assigned_rate_limits": { 00:21:36.478 "rw_ios_per_sec": 0, 00:21:36.478 "rw_mbytes_per_sec": 0, 00:21:36.478 "r_mbytes_per_sec": 0, 00:21:36.478 "w_mbytes_per_sec": 0 00:21:36.478 }, 00:21:36.478 "claimed": false, 00:21:36.478 "zoned": false, 00:21:36.478 "supported_io_types": { 00:21:36.478 "read": true, 00:21:36.478 "write": true, 00:21:36.478 "unmap": true, 00:21:36.478 "flush": true, 00:21:36.478 "reset": true, 00:21:36.478 "nvme_admin": false, 00:21:36.478 "nvme_io": false, 00:21:36.478 "nvme_io_md": false, 00:21:36.478 "write_zeroes": true, 00:21:36.478 "zcopy": true, 00:21:36.478 "get_zone_info": false, 00:21:36.478 "zone_management": false, 00:21:36.478 "zone_append": false, 00:21:36.478 "compare": false, 00:21:36.478 "compare_and_write": false, 00:21:36.478 "abort": true, 00:21:36.478 "seek_hole": false, 00:21:36.478 "seek_data": false, 00:21:36.478 "copy": true, 00:21:36.478 "nvme_iov_md": false 00:21:36.478 }, 00:21:36.478 "memory_domains": [ 00:21:36.478 { 00:21:36.478 "dma_device_id": "system", 00:21:36.478 "dma_device_type": 1 00:21:36.478 }, 00:21:36.478 { 00:21:36.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.478 "dma_device_type": 2 00:21:36.478 } 00:21:36.478 ], 00:21:36.478 "driver_specific": {} 00:21:36.478 } 00:21:36.478 ] 00:21:36.478 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:36.478 18:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:36.478 18:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:36.478 18:36:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:36.737 BaseBdev4 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:36.737 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.995 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:37.255 [ 00:21:37.255 { 00:21:37.255 "name": "BaseBdev4", 00:21:37.255 "aliases": [ 00:21:37.255 "5374c792-c265-48b4-b2d2-b9b561113857" 00:21:37.255 ], 00:21:37.255 "product_name": "Malloc disk", 00:21:37.255 "block_size": 512, 00:21:37.255 "num_blocks": 65536, 00:21:37.255 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:37.255 "assigned_rate_limits": { 00:21:37.255 "rw_ios_per_sec": 0, 00:21:37.255 "rw_mbytes_per_sec": 0, 00:21:37.255 "r_mbytes_per_sec": 0, 00:21:37.255 "w_mbytes_per_sec": 0 00:21:37.255 }, 00:21:37.255 "claimed": false, 00:21:37.255 "zoned": false, 00:21:37.255 "supported_io_types": { 00:21:37.255 "read": true, 
00:21:37.255 "write": true, 00:21:37.255 "unmap": true, 00:21:37.255 "flush": true, 00:21:37.255 "reset": true, 00:21:37.255 "nvme_admin": false, 00:21:37.255 "nvme_io": false, 00:21:37.255 "nvme_io_md": false, 00:21:37.255 "write_zeroes": true, 00:21:37.255 "zcopy": true, 00:21:37.255 "get_zone_info": false, 00:21:37.255 "zone_management": false, 00:21:37.255 "zone_append": false, 00:21:37.255 "compare": false, 00:21:37.255 "compare_and_write": false, 00:21:37.255 "abort": true, 00:21:37.255 "seek_hole": false, 00:21:37.255 "seek_data": false, 00:21:37.255 "copy": true, 00:21:37.255 "nvme_iov_md": false 00:21:37.255 }, 00:21:37.255 "memory_domains": [ 00:21:37.255 { 00:21:37.255 "dma_device_id": "system", 00:21:37.255 "dma_device_type": 1 00:21:37.255 }, 00:21:37.255 { 00:21:37.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.255 "dma_device_type": 2 00:21:37.255 } 00:21:37.255 ], 00:21:37.255 "driver_specific": {} 00:21:37.255 } 00:21:37.255 ] 00:21:37.255 18:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:37.255 18:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:37.255 18:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:37.255 18:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:37.514 [2024-07-15 18:36:22.998298] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.514 [2024-07-15 18:36:22.998336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.514 [2024-07-15 18:36:22.998354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.514 [2024-07-15 18:36:22.999760] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:37.514 [2024-07-15 18:36:22.999803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.514 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.080 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.080 "name": "Existed_Raid", 00:21:38.080 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:38.080 "strip_size_kb": 0, 00:21:38.080 "state": 
"configuring", 00:21:38.080 "raid_level": "raid1", 00:21:38.080 "superblock": true, 00:21:38.080 "num_base_bdevs": 4, 00:21:38.080 "num_base_bdevs_discovered": 3, 00:21:38.080 "num_base_bdevs_operational": 4, 00:21:38.080 "base_bdevs_list": [ 00:21:38.080 { 00:21:38.080 "name": "BaseBdev1", 00:21:38.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.080 "is_configured": false, 00:21:38.080 "data_offset": 0, 00:21:38.080 "data_size": 0 00:21:38.080 }, 00:21:38.080 { 00:21:38.080 "name": "BaseBdev2", 00:21:38.080 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:38.080 "is_configured": true, 00:21:38.080 "data_offset": 2048, 00:21:38.080 "data_size": 63488 00:21:38.080 }, 00:21:38.080 { 00:21:38.080 "name": "BaseBdev3", 00:21:38.080 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:38.080 "is_configured": true, 00:21:38.080 "data_offset": 2048, 00:21:38.080 "data_size": 63488 00:21:38.080 }, 00:21:38.080 { 00:21:38.080 "name": "BaseBdev4", 00:21:38.080 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:38.081 "is_configured": true, 00:21:38.081 "data_offset": 2048, 00:21:38.081 "data_size": 63488 00:21:38.081 } 00:21:38.081 ] 00:21:38.081 }' 00:21:38.081 18:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.081 18:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.647 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:38.905 [2024-07-15 18:36:24.285906] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.905 
18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.905 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.164 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.164 "name": "Existed_Raid", 00:21:39.164 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:39.164 "strip_size_kb": 0, 00:21:39.164 "state": "configuring", 00:21:39.164 "raid_level": "raid1", 00:21:39.164 "superblock": true, 00:21:39.164 "num_base_bdevs": 4, 00:21:39.164 "num_base_bdevs_discovered": 2, 00:21:39.164 "num_base_bdevs_operational": 4, 00:21:39.164 "base_bdevs_list": [ 00:21:39.164 { 00:21:39.164 "name": "BaseBdev1", 00:21:39.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.164 "is_configured": false, 00:21:39.164 "data_offset": 0, 00:21:39.164 "data_size": 0 00:21:39.164 }, 00:21:39.164 { 00:21:39.164 
"name": null, 00:21:39.164 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:39.164 "is_configured": false, 00:21:39.164 "data_offset": 2048, 00:21:39.164 "data_size": 63488 00:21:39.164 }, 00:21:39.164 { 00:21:39.164 "name": "BaseBdev3", 00:21:39.164 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:39.164 "is_configured": true, 00:21:39.164 "data_offset": 2048, 00:21:39.164 "data_size": 63488 00:21:39.164 }, 00:21:39.164 { 00:21:39.164 "name": "BaseBdev4", 00:21:39.164 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:39.164 "is_configured": true, 00:21:39.164 "data_offset": 2048, 00:21:39.164 "data_size": 63488 00:21:39.164 } 00:21:39.164 ] 00:21:39.164 }' 00:21:39.164 18:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.164 18:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.098 18:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.098 18:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:40.098 18:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:40.098 18:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:40.356 [2024-07-15 18:36:25.789248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.356 BaseBdev1 00:21:40.356 18:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:40.356 18:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:40.356 18:36:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:40.356 18:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:40.356 18:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:40.356 18:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:40.356 18:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.614 18:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:40.873 [ 00:21:40.873 { 00:21:40.873 "name": "BaseBdev1", 00:21:40.873 "aliases": [ 00:21:40.873 "0612f066-5fb3-43d5-9e20-82abc7b78454" 00:21:40.873 ], 00:21:40.873 "product_name": "Malloc disk", 00:21:40.873 "block_size": 512, 00:21:40.873 "num_blocks": 65536, 00:21:40.873 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:40.873 "assigned_rate_limits": { 00:21:40.873 "rw_ios_per_sec": 0, 00:21:40.873 "rw_mbytes_per_sec": 0, 00:21:40.873 "r_mbytes_per_sec": 0, 00:21:40.873 "w_mbytes_per_sec": 0 00:21:40.873 }, 00:21:40.873 "claimed": true, 00:21:40.873 "claim_type": "exclusive_write", 00:21:40.873 "zoned": false, 00:21:40.873 "supported_io_types": { 00:21:40.873 "read": true, 00:21:40.873 "write": true, 00:21:40.873 "unmap": true, 00:21:40.873 "flush": true, 00:21:40.873 "reset": true, 00:21:40.873 "nvme_admin": false, 00:21:40.873 "nvme_io": false, 00:21:40.873 "nvme_io_md": false, 00:21:40.873 "write_zeroes": true, 00:21:40.873 "zcopy": true, 00:21:40.873 "get_zone_info": false, 00:21:40.873 "zone_management": false, 00:21:40.873 "zone_append": false, 00:21:40.873 "compare": false, 00:21:40.873 
"compare_and_write": false, 00:21:40.873 "abort": true, 00:21:40.873 "seek_hole": false, 00:21:40.873 "seek_data": false, 00:21:40.873 "copy": true, 00:21:40.873 "nvme_iov_md": false 00:21:40.873 }, 00:21:40.873 "memory_domains": [ 00:21:40.873 { 00:21:40.873 "dma_device_id": "system", 00:21:40.873 "dma_device_type": 1 00:21:40.873 }, 00:21:40.873 { 00:21:40.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.873 "dma_device_type": 2 00:21:40.873 } 00:21:40.873 ], 00:21:40.873 "driver_specific": {} 00:21:40.873 } 00:21:40.873 ] 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.873 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.441 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.441 "name": "Existed_Raid", 00:21:41.441 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:41.441 "strip_size_kb": 0, 00:21:41.441 "state": "configuring", 00:21:41.441 "raid_level": "raid1", 00:21:41.441 "superblock": true, 00:21:41.441 "num_base_bdevs": 4, 00:21:41.441 "num_base_bdevs_discovered": 3, 00:21:41.441 "num_base_bdevs_operational": 4, 00:21:41.441 "base_bdevs_list": [ 00:21:41.441 { 00:21:41.441 "name": "BaseBdev1", 00:21:41.441 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:41.441 "is_configured": true, 00:21:41.441 "data_offset": 2048, 00:21:41.441 "data_size": 63488 00:21:41.441 }, 00:21:41.441 { 00:21:41.441 "name": null, 00:21:41.441 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:41.441 "is_configured": false, 00:21:41.441 "data_offset": 2048, 00:21:41.441 "data_size": 63488 00:21:41.441 }, 00:21:41.441 { 00:21:41.441 "name": "BaseBdev3", 00:21:41.441 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:41.441 "is_configured": true, 00:21:41.442 "data_offset": 2048, 00:21:41.442 "data_size": 63488 00:21:41.442 }, 00:21:41.442 { 00:21:41.442 "name": "BaseBdev4", 00:21:41.442 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:41.442 "is_configured": true, 00:21:41.442 "data_offset": 2048, 00:21:41.442 "data_size": 63488 00:21:41.442 } 00:21:41.442 ] 00:21:41.442 }' 00:21:41.442 18:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.442 18:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.007 18:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.007 18:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.264 18:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:42.264 18:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:42.522 [2024-07-15 18:36:27.987207] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:42.522 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.088 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.088 "name": "Existed_Raid", 00:21:43.088 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:43.088 "strip_size_kb": 0, 00:21:43.088 "state": "configuring", 00:21:43.088 "raid_level": "raid1", 00:21:43.088 "superblock": true, 00:21:43.088 "num_base_bdevs": 4, 00:21:43.088 "num_base_bdevs_discovered": 2, 00:21:43.088 "num_base_bdevs_operational": 4, 00:21:43.088 "base_bdevs_list": [ 00:21:43.088 { 00:21:43.088 "name": "BaseBdev1", 00:21:43.088 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:43.088 "is_configured": true, 00:21:43.088 "data_offset": 2048, 00:21:43.088 "data_size": 63488 00:21:43.088 }, 00:21:43.088 { 00:21:43.088 "name": null, 00:21:43.088 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:43.088 "is_configured": false, 00:21:43.088 "data_offset": 2048, 00:21:43.088 "data_size": 63488 00:21:43.088 }, 00:21:43.088 { 00:21:43.088 "name": null, 00:21:43.088 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:43.088 "is_configured": false, 00:21:43.088 "data_offset": 2048, 00:21:43.088 "data_size": 63488 00:21:43.088 }, 00:21:43.088 { 00:21:43.088 "name": "BaseBdev4", 00:21:43.088 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:43.088 "is_configured": true, 00:21:43.088 "data_offset": 2048, 00:21:43.088 "data_size": 63488 00:21:43.088 } 00:21:43.088 ] 00:21:43.088 }' 00:21:43.088 18:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.088 18:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.021 18:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:44.021 18:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:44.310 18:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:44.310 18:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:44.567 [2024-07-15 18:36:30.100906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:44.824 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.389 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.389 "name": "Existed_Raid", 00:21:45.389 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:45.389 "strip_size_kb": 0, 00:21:45.389 "state": "configuring", 00:21:45.389 "raid_level": "raid1", 00:21:45.389 "superblock": true, 00:21:45.389 "num_base_bdevs": 4, 00:21:45.389 "num_base_bdevs_discovered": 3, 00:21:45.389 "num_base_bdevs_operational": 4, 00:21:45.389 "base_bdevs_list": [ 00:21:45.389 { 00:21:45.389 "name": "BaseBdev1", 00:21:45.389 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:45.389 "is_configured": true, 00:21:45.389 "data_offset": 2048, 00:21:45.389 "data_size": 63488 00:21:45.389 }, 00:21:45.389 { 00:21:45.389 "name": null, 00:21:45.389 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:45.389 "is_configured": false, 00:21:45.389 "data_offset": 2048, 00:21:45.389 "data_size": 63488 00:21:45.389 }, 00:21:45.389 { 00:21:45.389 "name": "BaseBdev3", 00:21:45.389 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:45.389 "is_configured": true, 00:21:45.389 "data_offset": 2048, 00:21:45.389 "data_size": 63488 00:21:45.389 }, 00:21:45.389 { 00:21:45.389 "name": "BaseBdev4", 00:21:45.389 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:45.389 "is_configured": true, 00:21:45.389 "data_offset": 2048, 00:21:45.389 "data_size": 63488 00:21:45.389 } 00:21:45.389 ] 00:21:45.389 }' 00:21:45.389 18:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.389 18:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.647 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:45.647 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:45.905 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:45.905 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:46.470 [2024-07-15 18:36:31.901777] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.470 18:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.470 18:36:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.727 18:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.727 "name": "Existed_Raid", 00:21:46.727 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:46.727 "strip_size_kb": 0, 00:21:46.727 "state": "configuring", 00:21:46.727 "raid_level": "raid1", 00:21:46.727 "superblock": true, 00:21:46.727 "num_base_bdevs": 4, 00:21:46.727 "num_base_bdevs_discovered": 2, 00:21:46.727 "num_base_bdevs_operational": 4, 00:21:46.727 "base_bdevs_list": [ 00:21:46.727 { 00:21:46.727 "name": null, 00:21:46.727 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:46.727 "is_configured": false, 00:21:46.727 "data_offset": 2048, 00:21:46.727 "data_size": 63488 00:21:46.727 }, 00:21:46.727 { 00:21:46.727 "name": null, 00:21:46.727 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:46.727 "is_configured": false, 00:21:46.727 "data_offset": 2048, 00:21:46.727 "data_size": 63488 00:21:46.727 }, 00:21:46.727 { 00:21:46.727 "name": "BaseBdev3", 00:21:46.727 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:46.727 "is_configured": true, 00:21:46.727 "data_offset": 2048, 00:21:46.727 "data_size": 63488 00:21:46.727 }, 00:21:46.727 { 00:21:46.727 "name": "BaseBdev4", 00:21:46.727 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:46.727 "is_configured": true, 00:21:46.727 "data_offset": 2048, 00:21:46.727 "data_size": 63488 00:21:46.727 } 00:21:46.727 ] 00:21:46.727 }' 00:21:46.727 18:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.727 18:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.292 18:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.292 18:36:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:47.550 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:47.550 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:47.808 [2024-07-15 18:36:33.291970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.808 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.808 18:36:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.066 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.066 "name": "Existed_Raid", 00:21:48.066 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:48.066 "strip_size_kb": 0, 00:21:48.066 "state": "configuring", 00:21:48.066 "raid_level": "raid1", 00:21:48.066 "superblock": true, 00:21:48.066 "num_base_bdevs": 4, 00:21:48.066 "num_base_bdevs_discovered": 3, 00:21:48.066 "num_base_bdevs_operational": 4, 00:21:48.066 "base_bdevs_list": [ 00:21:48.066 { 00:21:48.066 "name": null, 00:21:48.066 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:48.066 "is_configured": false, 00:21:48.066 "data_offset": 2048, 00:21:48.066 "data_size": 63488 00:21:48.066 }, 00:21:48.066 { 00:21:48.066 "name": "BaseBdev2", 00:21:48.066 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:48.066 "is_configured": true, 00:21:48.066 "data_offset": 2048, 00:21:48.066 "data_size": 63488 00:21:48.066 }, 00:21:48.066 { 00:21:48.066 "name": "BaseBdev3", 00:21:48.066 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:48.066 "is_configured": true, 00:21:48.066 "data_offset": 2048, 00:21:48.066 "data_size": 63488 00:21:48.066 }, 00:21:48.066 { 00:21:48.066 "name": "BaseBdev4", 00:21:48.066 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:48.066 "is_configured": true, 00:21:48.066 "data_offset": 2048, 00:21:48.066 "data_size": 63488 00:21:48.066 } 00:21:48.066 ] 00:21:48.066 }' 00:21:48.066 18:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.066 18:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.631 18:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.631 18:36:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:48.889 18:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:48.889 18:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.889 18:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:49.145 18:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0612f066-5fb3-43d5-9e20-82abc7b78454 00:21:49.403 [2024-07-15 18:36:34.855555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:49.403 [2024-07-15 18:36:34.855719] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1679730 00:21:49.403 [2024-07-15 18:36:34.855731] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:49.403 [2024-07-15 18:36:34.855923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16642d0 00:21:49.403 [2024-07-15 18:36:34.856067] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1679730 00:21:49.403 [2024-07-15 18:36:34.856076] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1679730 00:21:49.403 [2024-07-15 18:36:34.856172] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.403 NewBaseBdev 00:21:49.403 18:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:49.403 18:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:49.403 18:36:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:49.403 18:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:49.403 18:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:49.403 18:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:49.403 18:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.660 18:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:49.918 [ 00:21:49.918 { 00:21:49.918 "name": "NewBaseBdev", 00:21:49.918 "aliases": [ 00:21:49.918 "0612f066-5fb3-43d5-9e20-82abc7b78454" 00:21:49.918 ], 00:21:49.918 "product_name": "Malloc disk", 00:21:49.918 "block_size": 512, 00:21:49.918 "num_blocks": 65536, 00:21:49.918 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:49.918 "assigned_rate_limits": { 00:21:49.918 "rw_ios_per_sec": 0, 00:21:49.918 "rw_mbytes_per_sec": 0, 00:21:49.918 "r_mbytes_per_sec": 0, 00:21:49.918 "w_mbytes_per_sec": 0 00:21:49.918 }, 00:21:49.918 "claimed": true, 00:21:49.918 "claim_type": "exclusive_write", 00:21:49.918 "zoned": false, 00:21:49.918 "supported_io_types": { 00:21:49.918 "read": true, 00:21:49.918 "write": true, 00:21:49.918 "unmap": true, 00:21:49.918 "flush": true, 00:21:49.918 "reset": true, 00:21:49.918 "nvme_admin": false, 00:21:49.918 "nvme_io": false, 00:21:49.918 "nvme_io_md": false, 00:21:49.918 "write_zeroes": true, 00:21:49.918 "zcopy": true, 00:21:49.918 "get_zone_info": false, 00:21:49.918 "zone_management": false, 00:21:49.918 "zone_append": false, 00:21:49.918 "compare": false, 00:21:49.918 
"compare_and_write": false, 00:21:49.918 "abort": true, 00:21:49.918 "seek_hole": false, 00:21:49.918 "seek_data": false, 00:21:49.918 "copy": true, 00:21:49.918 "nvme_iov_md": false 00:21:49.918 }, 00:21:49.918 "memory_domains": [ 00:21:49.918 { 00:21:49.918 "dma_device_id": "system", 00:21:49.918 "dma_device_type": 1 00:21:49.918 }, 00:21:49.918 { 00:21:49.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.918 "dma_device_type": 2 00:21:49.918 } 00:21:49.918 ], 00:21:49.918 "driver_specific": {} 00:21:49.918 } 00:21:49.918 ] 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:49.918 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.176 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.176 "name": "Existed_Raid", 00:21:50.176 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:50.176 "strip_size_kb": 0, 00:21:50.176 "state": "online", 00:21:50.176 "raid_level": "raid1", 00:21:50.176 "superblock": true, 00:21:50.176 "num_base_bdevs": 4, 00:21:50.176 "num_base_bdevs_discovered": 4, 00:21:50.176 "num_base_bdevs_operational": 4, 00:21:50.176 "base_bdevs_list": [ 00:21:50.176 { 00:21:50.176 "name": "NewBaseBdev", 00:21:50.176 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:50.176 "is_configured": true, 00:21:50.176 "data_offset": 2048, 00:21:50.176 "data_size": 63488 00:21:50.176 }, 00:21:50.176 { 00:21:50.176 "name": "BaseBdev2", 00:21:50.176 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:50.176 "is_configured": true, 00:21:50.176 "data_offset": 2048, 00:21:50.176 "data_size": 63488 00:21:50.176 }, 00:21:50.176 { 00:21:50.176 "name": "BaseBdev3", 00:21:50.176 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:50.176 "is_configured": true, 00:21:50.176 "data_offset": 2048, 00:21:50.176 "data_size": 63488 00:21:50.176 }, 00:21:50.176 { 00:21:50.176 "name": "BaseBdev4", 00:21:50.176 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:50.176 "is_configured": true, 00:21:50.176 "data_offset": 2048, 00:21:50.176 "data_size": 63488 00:21:50.176 } 00:21:50.176 ] 00:21:50.176 }' 00:21:50.176 18:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.176 18:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.742 18:36:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:50.742 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:51.000 [2024-07-15 18:36:36.500364] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.000 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:51.000 "name": "Existed_Raid", 00:21:51.000 "aliases": [ 00:21:51.000 "227540e5-f2db-42a3-8b0c-7645b4cfef03" 00:21:51.000 ], 00:21:51.000 "product_name": "Raid Volume", 00:21:51.000 "block_size": 512, 00:21:51.000 "num_blocks": 63488, 00:21:51.000 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:51.000 "assigned_rate_limits": { 00:21:51.000 "rw_ios_per_sec": 0, 00:21:51.000 "rw_mbytes_per_sec": 0, 00:21:51.000 "r_mbytes_per_sec": 0, 00:21:51.000 "w_mbytes_per_sec": 0 00:21:51.000 }, 00:21:51.000 "claimed": false, 00:21:51.000 "zoned": false, 00:21:51.000 "supported_io_types": { 00:21:51.000 "read": true, 00:21:51.000 "write": true, 00:21:51.000 "unmap": false, 00:21:51.000 "flush": false, 00:21:51.000 "reset": true, 00:21:51.000 "nvme_admin": false, 00:21:51.000 "nvme_io": false, 00:21:51.000 "nvme_io_md": false, 00:21:51.000 "write_zeroes": true, 00:21:51.000 "zcopy": false, 00:21:51.000 
"get_zone_info": false, 00:21:51.000 "zone_management": false, 00:21:51.000 "zone_append": false, 00:21:51.000 "compare": false, 00:21:51.000 "compare_and_write": false, 00:21:51.000 "abort": false, 00:21:51.000 "seek_hole": false, 00:21:51.000 "seek_data": false, 00:21:51.000 "copy": false, 00:21:51.000 "nvme_iov_md": false 00:21:51.000 }, 00:21:51.000 "memory_domains": [ 00:21:51.000 { 00:21:51.000 "dma_device_id": "system", 00:21:51.000 "dma_device_type": 1 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.000 "dma_device_type": 2 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "system", 00:21:51.000 "dma_device_type": 1 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.000 "dma_device_type": 2 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "system", 00:21:51.000 "dma_device_type": 1 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.000 "dma_device_type": 2 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "system", 00:21:51.000 "dma_device_type": 1 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.000 "dma_device_type": 2 00:21:51.000 } 00:21:51.000 ], 00:21:51.000 "driver_specific": { 00:21:51.000 "raid": { 00:21:51.000 "uuid": "227540e5-f2db-42a3-8b0c-7645b4cfef03", 00:21:51.000 "strip_size_kb": 0, 00:21:51.000 "state": "online", 00:21:51.000 "raid_level": "raid1", 00:21:51.000 "superblock": true, 00:21:51.000 "num_base_bdevs": 4, 00:21:51.000 "num_base_bdevs_discovered": 4, 00:21:51.000 "num_base_bdevs_operational": 4, 00:21:51.000 "base_bdevs_list": [ 00:21:51.000 { 00:21:51.000 "name": "NewBaseBdev", 00:21:51.000 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:51.000 "is_configured": true, 00:21:51.000 "data_offset": 2048, 00:21:51.000 "data_size": 63488 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "name": "BaseBdev2", 00:21:51.000 
"uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:51.000 "is_configured": true, 00:21:51.000 "data_offset": 2048, 00:21:51.000 "data_size": 63488 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "name": "BaseBdev3", 00:21:51.000 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:51.000 "is_configured": true, 00:21:51.000 "data_offset": 2048, 00:21:51.000 "data_size": 63488 00:21:51.000 }, 00:21:51.000 { 00:21:51.000 "name": "BaseBdev4", 00:21:51.000 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:51.000 "is_configured": true, 00:21:51.000 "data_offset": 2048, 00:21:51.000 "data_size": 63488 00:21:51.000 } 00:21:51.000 ] 00:21:51.000 } 00:21:51.000 } 00:21:51.000 }' 00:21:51.000 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:51.258 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:51.258 BaseBdev2 00:21:51.258 BaseBdev3 00:21:51.258 BaseBdev4' 00:21:51.258 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.258 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:51.258 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.516 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.516 "name": "NewBaseBdev", 00:21:51.516 "aliases": [ 00:21:51.516 "0612f066-5fb3-43d5-9e20-82abc7b78454" 00:21:51.516 ], 00:21:51.516 "product_name": "Malloc disk", 00:21:51.516 "block_size": 512, 00:21:51.516 "num_blocks": 65536, 00:21:51.516 "uuid": "0612f066-5fb3-43d5-9e20-82abc7b78454", 00:21:51.516 "assigned_rate_limits": { 00:21:51.516 "rw_ios_per_sec": 0, 00:21:51.516 "rw_mbytes_per_sec": 0, 
00:21:51.516 "r_mbytes_per_sec": 0, 00:21:51.516 "w_mbytes_per_sec": 0 00:21:51.516 }, 00:21:51.516 "claimed": true, 00:21:51.516 "claim_type": "exclusive_write", 00:21:51.516 "zoned": false, 00:21:51.516 "supported_io_types": { 00:21:51.516 "read": true, 00:21:51.516 "write": true, 00:21:51.516 "unmap": true, 00:21:51.516 "flush": true, 00:21:51.516 "reset": true, 00:21:51.516 "nvme_admin": false, 00:21:51.516 "nvme_io": false, 00:21:51.516 "nvme_io_md": false, 00:21:51.516 "write_zeroes": true, 00:21:51.516 "zcopy": true, 00:21:51.516 "get_zone_info": false, 00:21:51.516 "zone_management": false, 00:21:51.516 "zone_append": false, 00:21:51.516 "compare": false, 00:21:51.516 "compare_and_write": false, 00:21:51.516 "abort": true, 00:21:51.516 "seek_hole": false, 00:21:51.516 "seek_data": false, 00:21:51.516 "copy": true, 00:21:51.516 "nvme_iov_md": false 00:21:51.516 }, 00:21:51.516 "memory_domains": [ 00:21:51.516 { 00:21:51.516 "dma_device_id": "system", 00:21:51.516 "dma_device_type": 1 00:21:51.516 }, 00:21:51.516 { 00:21:51.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.516 "dma_device_type": 2 00:21:51.516 } 00:21:51.516 ], 00:21:51.516 "driver_specific": {} 00:21:51.516 }' 00:21:51.516 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.516 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.516 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.516 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.516 18:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.516 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.516 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.516 18:36:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:51.774 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.031 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.031 "name": "BaseBdev2", 00:21:52.031 "aliases": [ 00:21:52.031 "b71dd510-318d-44e5-9afd-d0c4a6ceab1d" 00:21:52.031 ], 00:21:52.031 "product_name": "Malloc disk", 00:21:52.031 "block_size": 512, 00:21:52.031 "num_blocks": 65536, 00:21:52.031 "uuid": "b71dd510-318d-44e5-9afd-d0c4a6ceab1d", 00:21:52.031 "assigned_rate_limits": { 00:21:52.031 "rw_ios_per_sec": 0, 00:21:52.031 "rw_mbytes_per_sec": 0, 00:21:52.031 "r_mbytes_per_sec": 0, 00:21:52.031 "w_mbytes_per_sec": 0 00:21:52.031 }, 00:21:52.031 "claimed": true, 00:21:52.031 "claim_type": "exclusive_write", 00:21:52.031 "zoned": false, 00:21:52.031 "supported_io_types": { 00:21:52.031 "read": true, 00:21:52.031 "write": true, 00:21:52.031 "unmap": true, 00:21:52.031 "flush": true, 00:21:52.031 "reset": true, 00:21:52.031 "nvme_admin": false, 00:21:52.031 "nvme_io": false, 00:21:52.031 "nvme_io_md": false, 00:21:52.031 "write_zeroes": true, 00:21:52.031 "zcopy": true, 00:21:52.031 
"get_zone_info": false, 00:21:52.031 "zone_management": false, 00:21:52.031 "zone_append": false, 00:21:52.031 "compare": false, 00:21:52.031 "compare_and_write": false, 00:21:52.031 "abort": true, 00:21:52.031 "seek_hole": false, 00:21:52.031 "seek_data": false, 00:21:52.031 "copy": true, 00:21:52.031 "nvme_iov_md": false 00:21:52.031 }, 00:21:52.031 "memory_domains": [ 00:21:52.031 { 00:21:52.031 "dma_device_id": "system", 00:21:52.031 "dma_device_type": 1 00:21:52.031 }, 00:21:52.031 { 00:21:52.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.031 "dma_device_type": 2 00:21:52.031 } 00:21:52.031 ], 00:21:52.031 "driver_specific": {} 00:21:52.031 }' 00:21:52.031 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.031 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.031 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.031 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.288 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.288 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.288 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.288 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:52.289 18:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.546 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.546 "name": "BaseBdev3", 00:21:52.546 "aliases": [ 00:21:52.546 "1ba2946a-88c0-46bb-b52f-709ffb011157" 00:21:52.546 ], 00:21:52.546 "product_name": "Malloc disk", 00:21:52.546 "block_size": 512, 00:21:52.546 "num_blocks": 65536, 00:21:52.546 "uuid": "1ba2946a-88c0-46bb-b52f-709ffb011157", 00:21:52.546 "assigned_rate_limits": { 00:21:52.546 "rw_ios_per_sec": 0, 00:21:52.546 "rw_mbytes_per_sec": 0, 00:21:52.546 "r_mbytes_per_sec": 0, 00:21:52.546 "w_mbytes_per_sec": 0 00:21:52.546 }, 00:21:52.546 "claimed": true, 00:21:52.546 "claim_type": "exclusive_write", 00:21:52.546 "zoned": false, 00:21:52.546 "supported_io_types": { 00:21:52.546 "read": true, 00:21:52.546 "write": true, 00:21:52.546 "unmap": true, 00:21:52.546 "flush": true, 00:21:52.546 "reset": true, 00:21:52.546 "nvme_admin": false, 00:21:52.546 "nvme_io": false, 00:21:52.546 "nvme_io_md": false, 00:21:52.546 "write_zeroes": true, 00:21:52.546 "zcopy": true, 00:21:52.546 "get_zone_info": false, 00:21:52.546 "zone_management": false, 00:21:52.546 "zone_append": false, 00:21:52.546 "compare": false, 00:21:52.546 "compare_and_write": false, 00:21:52.546 "abort": true, 00:21:52.546 "seek_hole": false, 00:21:52.546 "seek_data": false, 00:21:52.546 "copy": true, 00:21:52.546 "nvme_iov_md": false 00:21:52.546 }, 00:21:52.546 "memory_domains": [ 00:21:52.546 { 00:21:52.546 "dma_device_id": "system", 00:21:52.546 "dma_device_type": 1 00:21:52.546 }, 00:21:52.546 { 00:21:52.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.546 
"dma_device_type": 2 00:21:52.546 } 00:21:52.546 ], 00:21:52.546 "driver_specific": {} 00:21:52.546 }' 00:21:52.546 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.804 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:53.061 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:53.319 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:53.319 "name": "BaseBdev4", 00:21:53.319 "aliases": [ 00:21:53.319 
"5374c792-c265-48b4-b2d2-b9b561113857" 00:21:53.319 ], 00:21:53.319 "product_name": "Malloc disk", 00:21:53.319 "block_size": 512, 00:21:53.319 "num_blocks": 65536, 00:21:53.319 "uuid": "5374c792-c265-48b4-b2d2-b9b561113857", 00:21:53.319 "assigned_rate_limits": { 00:21:53.319 "rw_ios_per_sec": 0, 00:21:53.319 "rw_mbytes_per_sec": 0, 00:21:53.319 "r_mbytes_per_sec": 0, 00:21:53.319 "w_mbytes_per_sec": 0 00:21:53.319 }, 00:21:53.319 "claimed": true, 00:21:53.319 "claim_type": "exclusive_write", 00:21:53.319 "zoned": false, 00:21:53.319 "supported_io_types": { 00:21:53.319 "read": true, 00:21:53.319 "write": true, 00:21:53.319 "unmap": true, 00:21:53.319 "flush": true, 00:21:53.319 "reset": true, 00:21:53.319 "nvme_admin": false, 00:21:53.319 "nvme_io": false, 00:21:53.319 "nvme_io_md": false, 00:21:53.319 "write_zeroes": true, 00:21:53.319 "zcopy": true, 00:21:53.319 "get_zone_info": false, 00:21:53.319 "zone_management": false, 00:21:53.319 "zone_append": false, 00:21:53.319 "compare": false, 00:21:53.319 "compare_and_write": false, 00:21:53.319 "abort": true, 00:21:53.319 "seek_hole": false, 00:21:53.319 "seek_data": false, 00:21:53.319 "copy": true, 00:21:53.319 "nvme_iov_md": false 00:21:53.319 }, 00:21:53.319 "memory_domains": [ 00:21:53.319 { 00:21:53.319 "dma_device_id": "system", 00:21:53.319 "dma_device_type": 1 00:21:53.319 }, 00:21:53.319 { 00:21:53.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.319 "dma_device_type": 2 00:21:53.319 } 00:21:53.319 ], 00:21:53.319 "driver_specific": {} 00:21:53.319 }' 00:21:53.319 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.319 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.319 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:53.320 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.320 18:36:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.577 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:53.577 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.577 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.577 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.577 18:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.577 18:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.577 18:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.577 18:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:53.835 [2024-07-15 18:36:39.299560] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.835 [2024-07-15 18:36:39.299587] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.835 [2024-07-15 18:36:39.299637] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.835 [2024-07-15 18:36:39.299916] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:53.835 [2024-07-15 18:36:39.299926] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1679730 name Existed_Raid, state offline 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2874598 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2874598 ']' 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@952 -- # kill -0 2874598 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2874598 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2874598' 00:21:53.835 killing process with pid 2874598 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2874598 00:21:53.835 [2024-07-15 18:36:39.362669] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:53.835 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2874598 00:21:54.093 [2024-07-15 18:36:39.398347] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.093 18:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:54.093 00:21:54.093 real 0m36.607s 00:21:54.093 user 1m9.291s 00:21:54.093 sys 0m4.731s 00:21:54.093 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:54.093 18:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.093 ************************************ 00:21:54.093 END TEST raid_state_function_test_sb 00:21:54.093 ************************************ 00:21:54.093 18:36:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:54.093 18:36:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test 
raid_superblock_test raid_superblock_test raid1 4 00:21:54.093 18:36:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:54.093 18:36:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.093 18:36:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:54.093 ************************************ 00:21:54.093 START TEST raid_superblock_test 00:21:54.093 ************************************ 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:54.093 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:54.094 18:36:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:54.094 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:54.094 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2881187 00:21:54.094 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2881187 /var/tmp/spdk-raid.sock 00:21:54.094 18:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:54.352 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2881187 ']' 00:21:54.352 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:54.352 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.352 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:54.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:54.352 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.352 18:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.352 [2024-07-15 18:36:39.699660] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:21:54.352 [2024-07-15 18:36:39.699725] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881187 ] 00:21:54.352 [2024-07-15 18:36:39.799282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.352 [2024-07-15 18:36:39.894083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.610 [2024-07-15 18:36:39.954530] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.610 [2024-07-15 18:36:39.954563] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:55.176 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:55.434 malloc1 00:21:55.434 18:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:55.692 [2024-07-15 18:36:41.136714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:55.692 [2024-07-15 18:36:41.136756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.692 [2024-07-15 18:36:41.136773] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd8e20 00:21:55.692 [2024-07-15 18:36:41.136783] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.692 [2024-07-15 18:36:41.138481] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.692 [2024-07-15 18:36:41.138509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:55.692 pt1 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:55.692 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:55.692 18:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:55.950 malloc2 00:21:55.950 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.207 [2024-07-15 18:36:41.654790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.207 [2024-07-15 18:36:41.654840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.207 [2024-07-15 18:36:41.654856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2182ed0 00:21:56.207 [2024-07-15 18:36:41.654865] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.207 [2024-07-15 18:36:41.656483] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.207 [2024-07-15 18:36:41.656511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.207 pt2 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.207 18:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.207 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:56.466 malloc3 00:21:56.466 18:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:56.725 [2024-07-15 18:36:42.168811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:56.725 [2024-07-15 18:36:42.168858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.725 [2024-07-15 18:36:42.168874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2186a30 00:21:56.725 [2024-07-15 18:36:42.168883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.725 [2024-07-15 18:36:42.170489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.725 [2024-07-15 18:36:42.170516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:56.725 pt3 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.725 
18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.725 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:56.983 malloc4 00:21:56.983 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:57.303 [2024-07-15 18:36:42.682844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:57.303 [2024-07-15 18:36:42.682892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.303 [2024-07-15 18:36:42.682907] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2183900 00:21:57.303 [2024-07-15 18:36:42.682917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.303 [2024-07-15 18:36:42.684526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.303 [2024-07-15 18:36:42.684553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:57.303 pt4 00:21:57.303 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:57.303 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:57.303 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:57.562 [2024-07-15 18:36:42.927513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:21:57.562 [2024-07-15 18:36:42.928867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:57.562 [2024-07-15 18:36:42.928927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:57.562 [2024-07-15 18:36:42.928983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:57.562 [2024-07-15 18:36:42.929163] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2186d40 00:21:57.562 [2024-07-15 18:36:42.929173] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:57.562 [2024-07-15 18:36:42.929377] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x218b140 00:21:57.562 [2024-07-15 18:36:42.929531] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2186d40 00:21:57.562 [2024-07-15 18:36:42.929540] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2186d40 00:21:57.562 [2024-07-15 18:36:42.929642] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.562 18:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.820 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.820 "name": "raid_bdev1", 00:21:57.820 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:21:57.820 "strip_size_kb": 0, 00:21:57.820 "state": "online", 00:21:57.820 "raid_level": "raid1", 00:21:57.820 "superblock": true, 00:21:57.820 "num_base_bdevs": 4, 00:21:57.820 "num_base_bdevs_discovered": 4, 00:21:57.820 "num_base_bdevs_operational": 4, 00:21:57.820 "base_bdevs_list": [ 00:21:57.820 { 00:21:57.820 "name": "pt1", 00:21:57.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:57.820 "is_configured": true, 00:21:57.820 "data_offset": 2048, 00:21:57.820 "data_size": 63488 00:21:57.820 }, 00:21:57.820 { 00:21:57.820 "name": "pt2", 00:21:57.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.820 "is_configured": true, 00:21:57.820 "data_offset": 2048, 00:21:57.820 "data_size": 63488 00:21:57.820 }, 00:21:57.820 { 00:21:57.820 "name": "pt3", 00:21:57.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:57.820 "is_configured": true, 00:21:57.820 "data_offset": 2048, 00:21:57.820 "data_size": 63488 00:21:57.820 }, 00:21:57.820 { 00:21:57.820 "name": "pt4", 00:21:57.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:57.820 "is_configured": true, 00:21:57.820 "data_offset": 2048, 00:21:57.820 "data_size": 63488 00:21:57.820 } 00:21:57.820 ] 00:21:57.820 }' 00:21:57.820 18:36:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.820 18:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:58.385 18:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:58.643 [2024-07-15 18:36:44.070892] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.643 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:58.643 "name": "raid_bdev1", 00:21:58.643 "aliases": [ 00:21:58.643 "f6aaad14-028d-45b4-888f-1fee62d201bd" 00:21:58.644 ], 00:21:58.644 "product_name": "Raid Volume", 00:21:58.644 "block_size": 512, 00:21:58.644 "num_blocks": 63488, 00:21:58.644 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:21:58.644 "assigned_rate_limits": { 00:21:58.644 "rw_ios_per_sec": 0, 00:21:58.644 "rw_mbytes_per_sec": 0, 00:21:58.644 "r_mbytes_per_sec": 0, 00:21:58.644 "w_mbytes_per_sec": 0 00:21:58.644 }, 00:21:58.644 "claimed": false, 00:21:58.644 "zoned": false, 00:21:58.644 "supported_io_types": { 00:21:58.644 "read": true, 00:21:58.644 "write": true, 00:21:58.644 
"unmap": false, 00:21:58.644 "flush": false, 00:21:58.644 "reset": true, 00:21:58.644 "nvme_admin": false, 00:21:58.644 "nvme_io": false, 00:21:58.644 "nvme_io_md": false, 00:21:58.644 "write_zeroes": true, 00:21:58.644 "zcopy": false, 00:21:58.644 "get_zone_info": false, 00:21:58.644 "zone_management": false, 00:21:58.644 "zone_append": false, 00:21:58.644 "compare": false, 00:21:58.644 "compare_and_write": false, 00:21:58.644 "abort": false, 00:21:58.644 "seek_hole": false, 00:21:58.644 "seek_data": false, 00:21:58.644 "copy": false, 00:21:58.644 "nvme_iov_md": false 00:21:58.644 }, 00:21:58.644 "memory_domains": [ 00:21:58.644 { 00:21:58.644 "dma_device_id": "system", 00:21:58.644 "dma_device_type": 1 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.644 "dma_device_type": 2 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "system", 00:21:58.644 "dma_device_type": 1 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.644 "dma_device_type": 2 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "system", 00:21:58.644 "dma_device_type": 1 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.644 "dma_device_type": 2 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "system", 00:21:58.644 "dma_device_type": 1 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.644 "dma_device_type": 2 00:21:58.644 } 00:21:58.644 ], 00:21:58.644 "driver_specific": { 00:21:58.644 "raid": { 00:21:58.644 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:21:58.644 "strip_size_kb": 0, 00:21:58.644 "state": "online", 00:21:58.644 "raid_level": "raid1", 00:21:58.644 "superblock": true, 00:21:58.644 "num_base_bdevs": 4, 00:21:58.644 "num_base_bdevs_discovered": 4, 00:21:58.644 "num_base_bdevs_operational": 4, 00:21:58.644 "base_bdevs_list": [ 00:21:58.644 { 00:21:58.644 "name": "pt1", 
00:21:58.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:58.644 "is_configured": true, 00:21:58.644 "data_offset": 2048, 00:21:58.644 "data_size": 63488 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "name": "pt2", 00:21:58.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:58.644 "is_configured": true, 00:21:58.644 "data_offset": 2048, 00:21:58.644 "data_size": 63488 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "name": "pt3", 00:21:58.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:58.644 "is_configured": true, 00:21:58.644 "data_offset": 2048, 00:21:58.644 "data_size": 63488 00:21:58.644 }, 00:21:58.644 { 00:21:58.644 "name": "pt4", 00:21:58.644 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:58.644 "is_configured": true, 00:21:58.644 "data_offset": 2048, 00:21:58.644 "data_size": 63488 00:21:58.644 } 00:21:58.644 ] 00:21:58.644 } 00:21:58.644 } 00:21:58.644 }' 00:21:58.644 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:58.644 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:58.644 pt2 00:21:58.644 pt3 00:21:58.644 pt4' 00:21:58.644 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.644 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:58.644 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.902 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.902 "name": "pt1", 00:21:58.902 "aliases": [ 00:21:58.902 "00000000-0000-0000-0000-000000000001" 00:21:58.902 ], 00:21:58.902 "product_name": "passthru", 00:21:58.902 "block_size": 512, 00:21:58.902 "num_blocks": 65536, 00:21:58.902 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:58.902 "assigned_rate_limits": { 00:21:58.902 "rw_ios_per_sec": 0, 00:21:58.902 "rw_mbytes_per_sec": 0, 00:21:58.902 "r_mbytes_per_sec": 0, 00:21:58.902 "w_mbytes_per_sec": 0 00:21:58.902 }, 00:21:58.902 "claimed": true, 00:21:58.902 "claim_type": "exclusive_write", 00:21:58.902 "zoned": false, 00:21:58.902 "supported_io_types": { 00:21:58.902 "read": true, 00:21:58.902 "write": true, 00:21:58.902 "unmap": true, 00:21:58.902 "flush": true, 00:21:58.902 "reset": true, 00:21:58.902 "nvme_admin": false, 00:21:58.902 "nvme_io": false, 00:21:58.902 "nvme_io_md": false, 00:21:58.902 "write_zeroes": true, 00:21:58.902 "zcopy": true, 00:21:58.902 "get_zone_info": false, 00:21:58.902 "zone_management": false, 00:21:58.902 "zone_append": false, 00:21:58.902 "compare": false, 00:21:58.902 "compare_and_write": false, 00:21:58.902 "abort": true, 00:21:58.902 "seek_hole": false, 00:21:58.902 "seek_data": false, 00:21:58.902 "copy": true, 00:21:58.902 "nvme_iov_md": false 00:21:58.902 }, 00:21:58.902 "memory_domains": [ 00:21:58.902 { 00:21:58.902 "dma_device_id": "system", 00:21:58.902 "dma_device_type": 1 00:21:58.902 }, 00:21:58.902 { 00:21:58.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.902 "dma_device_type": 2 00:21:58.902 } 00:21:58.902 ], 00:21:58.902 "driver_specific": { 00:21:58.902 "passthru": { 00:21:58.902 "name": "pt1", 00:21:58.902 "base_bdev_name": "malloc1" 00:21:58.902 } 00:21:58.902 } 00:21:58.902 }' 00:21:58.902 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.902 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.160 18:36:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.160 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.417 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.417 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:59.417 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:59.417 18:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:59.675 "name": "pt2", 00:21:59.675 "aliases": [ 00:21:59.675 "00000000-0000-0000-0000-000000000002" 00:21:59.675 ], 00:21:59.675 "product_name": "passthru", 00:21:59.675 "block_size": 512, 00:21:59.675 "num_blocks": 65536, 00:21:59.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:59.675 "assigned_rate_limits": { 00:21:59.675 "rw_ios_per_sec": 0, 00:21:59.675 "rw_mbytes_per_sec": 0, 00:21:59.675 "r_mbytes_per_sec": 0, 00:21:59.675 "w_mbytes_per_sec": 0 00:21:59.675 }, 00:21:59.675 "claimed": true, 00:21:59.675 "claim_type": "exclusive_write", 00:21:59.675 "zoned": false, 00:21:59.675 "supported_io_types": { 00:21:59.675 "read": true, 00:21:59.675 "write": true, 00:21:59.675 "unmap": true, 00:21:59.675 "flush": true, 00:21:59.675 "reset": true, 00:21:59.675 "nvme_admin": false, 00:21:59.675 
"nvme_io": false, 00:21:59.675 "nvme_io_md": false, 00:21:59.675 "write_zeroes": true, 00:21:59.675 "zcopy": true, 00:21:59.675 "get_zone_info": false, 00:21:59.675 "zone_management": false, 00:21:59.675 "zone_append": false, 00:21:59.675 "compare": false, 00:21:59.675 "compare_and_write": false, 00:21:59.675 "abort": true, 00:21:59.675 "seek_hole": false, 00:21:59.675 "seek_data": false, 00:21:59.675 "copy": true, 00:21:59.675 "nvme_iov_md": false 00:21:59.675 }, 00:21:59.675 "memory_domains": [ 00:21:59.675 { 00:21:59.675 "dma_device_id": "system", 00:21:59.675 "dma_device_type": 1 00:21:59.675 }, 00:21:59.675 { 00:21:59.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.675 "dma_device_type": 2 00:21:59.675 } 00:21:59.675 ], 00:21:59.675 "driver_specific": { 00:21:59.675 "passthru": { 00:21:59.675 "name": "pt2", 00:21:59.675 "base_bdev_name": "malloc2" 00:21:59.675 } 00:21:59.675 } 00:21:59.675 }' 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.675 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:59.933 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:00.191 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:00.191 "name": "pt3", 00:22:00.191 "aliases": [ 00:22:00.191 "00000000-0000-0000-0000-000000000003" 00:22:00.191 ], 00:22:00.191 "product_name": "passthru", 00:22:00.191 "block_size": 512, 00:22:00.191 "num_blocks": 65536, 00:22:00.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:00.191 "assigned_rate_limits": { 00:22:00.191 "rw_ios_per_sec": 0, 00:22:00.191 "rw_mbytes_per_sec": 0, 00:22:00.191 "r_mbytes_per_sec": 0, 00:22:00.191 "w_mbytes_per_sec": 0 00:22:00.191 }, 00:22:00.191 "claimed": true, 00:22:00.191 "claim_type": "exclusive_write", 00:22:00.191 "zoned": false, 00:22:00.191 "supported_io_types": { 00:22:00.191 "read": true, 00:22:00.191 "write": true, 00:22:00.191 "unmap": true, 00:22:00.191 "flush": true, 00:22:00.191 "reset": true, 00:22:00.191 "nvme_admin": false, 00:22:00.191 "nvme_io": false, 00:22:00.191 "nvme_io_md": false, 00:22:00.191 "write_zeroes": true, 00:22:00.191 "zcopy": true, 00:22:00.191 "get_zone_info": false, 00:22:00.191 "zone_management": false, 00:22:00.191 "zone_append": false, 00:22:00.191 "compare": false, 00:22:00.191 "compare_and_write": false, 00:22:00.191 "abort": true, 00:22:00.191 "seek_hole": false, 00:22:00.191 "seek_data": false, 00:22:00.191 "copy": true, 00:22:00.191 "nvme_iov_md": false 00:22:00.191 }, 00:22:00.191 "memory_domains": [ 00:22:00.191 { 00:22:00.191 "dma_device_id": "system", 00:22:00.191 
"dma_device_type": 1 00:22:00.191 }, 00:22:00.191 { 00:22:00.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.191 "dma_device_type": 2 00:22:00.191 } 00:22:00.191 ], 00:22:00.191 "driver_specific": { 00:22:00.191 "passthru": { 00:22:00.191 "name": "pt3", 00:22:00.191 "base_bdev_name": "malloc3" 00:22:00.191 } 00:22:00.191 } 00:22:00.191 }' 00:22:00.191 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.191 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.191 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:00.191 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:00.448 18:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:00.706 18:36:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:00.706 "name": "pt4", 00:22:00.706 "aliases": [ 00:22:00.706 "00000000-0000-0000-0000-000000000004" 00:22:00.706 ], 00:22:00.706 "product_name": "passthru", 00:22:00.706 "block_size": 512, 00:22:00.706 "num_blocks": 65536, 00:22:00.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:00.706 "assigned_rate_limits": { 00:22:00.706 "rw_ios_per_sec": 0, 00:22:00.706 "rw_mbytes_per_sec": 0, 00:22:00.706 "r_mbytes_per_sec": 0, 00:22:00.706 "w_mbytes_per_sec": 0 00:22:00.706 }, 00:22:00.706 "claimed": true, 00:22:00.706 "claim_type": "exclusive_write", 00:22:00.706 "zoned": false, 00:22:00.706 "supported_io_types": { 00:22:00.706 "read": true, 00:22:00.706 "write": true, 00:22:00.706 "unmap": true, 00:22:00.706 "flush": true, 00:22:00.706 "reset": true, 00:22:00.706 "nvme_admin": false, 00:22:00.706 "nvme_io": false, 00:22:00.706 "nvme_io_md": false, 00:22:00.706 "write_zeroes": true, 00:22:00.706 "zcopy": true, 00:22:00.706 "get_zone_info": false, 00:22:00.706 "zone_management": false, 00:22:00.706 "zone_append": false, 00:22:00.706 "compare": false, 00:22:00.706 "compare_and_write": false, 00:22:00.706 "abort": true, 00:22:00.706 "seek_hole": false, 00:22:00.706 "seek_data": false, 00:22:00.706 "copy": true, 00:22:00.706 "nvme_iov_md": false 00:22:00.706 }, 00:22:00.706 "memory_domains": [ 00:22:00.706 { 00:22:00.706 "dma_device_id": "system", 00:22:00.706 "dma_device_type": 1 00:22:00.706 }, 00:22:00.706 { 00:22:00.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.706 "dma_device_type": 2 00:22:00.706 } 00:22:00.706 ], 00:22:00.706 "driver_specific": { 00:22:00.706 "passthru": { 00:22:00.706 "name": "pt4", 00:22:00.706 "base_bdev_name": "malloc4" 00:22:00.706 } 00:22:00.706 } 00:22:00.706 }' 00:22:00.706 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.964 18:36:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.964 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:01.221 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:01.221 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:01.221 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:01.221 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:01.478 [2024-07-15 18:36:46.834349] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.478 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f6aaad14-028d-45b4-888f-1fee62d201bd 00:22:01.478 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f6aaad14-028d-45b4-888f-1fee62d201bd ']' 00:22:01.478 18:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:01.735 [2024-07-15 18:36:47.090709] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:01.735 
[2024-07-15 18:36:47.090731] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:01.735 [2024-07-15 18:36:47.090777] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:01.735 [2024-07-15 18:36:47.090855] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:01.735 [2024-07-15 18:36:47.090865] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2186d40 name raid_bdev1, state offline 00:22:01.735 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.735 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:01.992 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:01.992 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:01.992 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:01.992 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:02.248 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:02.248 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:02.505 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:02.505 18:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:02.762 18:36:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:02.763 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:03.020 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:03.020 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:03.277 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:03.278 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:03.535 [2024-07-15 18:36:48.879539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:03.536 [2024-07-15 18:36:48.880999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:03.536 [2024-07-15 18:36:48.881050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:03.536 [2024-07-15 18:36:48.881085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:03.536 [2024-07-15 18:36:48.881127] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:03.536 [2024-07-15 18:36:48.881162] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:03.536 [2024-07-15 18:36:48.881182] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:03.536 [2024-07-15 18:36:48.881201] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:03.536 [2024-07-15 
18:36:48.881215] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:03.536 [2024-07-15 18:36:48.881223] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2183100 name raid_bdev1, state configuring 00:22:03.536 request: 00:22:03.536 { 00:22:03.536 "name": "raid_bdev1", 00:22:03.536 "raid_level": "raid1", 00:22:03.536 "base_bdevs": [ 00:22:03.536 "malloc1", 00:22:03.536 "malloc2", 00:22:03.536 "malloc3", 00:22:03.536 "malloc4" 00:22:03.536 ], 00:22:03.536 "superblock": false, 00:22:03.536 "method": "bdev_raid_create", 00:22:03.536 "req_id": 1 00:22:03.536 } 00:22:03.536 Got JSON-RPC error response 00:22:03.536 response: 00:22:03.536 { 00:22:03.536 "code": -17, 00:22:03.536 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:03.536 } 00:22:03.536 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:03.536 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.536 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.536 18:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.536 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.536 18:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:03.794 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:03.794 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:03.794 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:04.052 [2024-07-15 18:36:49.392858] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:04.052 [2024-07-15 18:36:49.392902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.052 [2024-07-15 18:36:49.392920] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2184c60 00:22:04.052 [2024-07-15 18:36:49.392930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.052 [2024-07-15 18:36:49.394593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.052 [2024-07-15 18:36:49.394621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:04.052 [2024-07-15 18:36:49.394682] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:04.052 [2024-07-15 18:36:49.394708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:04.052 pt1 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.052 18:36:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.052 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.310 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.310 "name": "raid_bdev1", 00:22:04.310 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:04.310 "strip_size_kb": 0, 00:22:04.310 "state": "configuring", 00:22:04.310 "raid_level": "raid1", 00:22:04.310 "superblock": true, 00:22:04.310 "num_base_bdevs": 4, 00:22:04.310 "num_base_bdevs_discovered": 1, 00:22:04.310 "num_base_bdevs_operational": 4, 00:22:04.310 "base_bdevs_list": [ 00:22:04.310 { 00:22:04.310 "name": "pt1", 00:22:04.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:04.310 "is_configured": true, 00:22:04.310 "data_offset": 2048, 00:22:04.310 "data_size": 63488 00:22:04.310 }, 00:22:04.310 { 00:22:04.310 "name": null, 00:22:04.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:04.310 "is_configured": false, 00:22:04.310 "data_offset": 2048, 00:22:04.310 "data_size": 63488 00:22:04.310 }, 00:22:04.310 { 00:22:04.310 "name": null, 00:22:04.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:04.310 "is_configured": false, 00:22:04.310 "data_offset": 2048, 00:22:04.310 "data_size": 63488 00:22:04.310 }, 00:22:04.310 { 00:22:04.310 "name": null, 00:22:04.310 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:04.310 "is_configured": false, 00:22:04.310 "data_offset": 2048, 00:22:04.310 "data_size": 63488 00:22:04.310 } 00:22:04.310 ] 00:22:04.310 }' 00:22:04.310 18:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.310 18:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:04.876 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:22:04.876 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:05.134 [2024-07-15 18:36:50.540000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:05.134 [2024-07-15 18:36:50.540055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.134 [2024-07-15 18:36:50.540072] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd7aa0 00:22:05.134 [2024-07-15 18:36:50.540081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.134 [2024-07-15 18:36:50.540421] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.134 [2024-07-15 18:36:50.540438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:05.134 [2024-07-15 18:36:50.540499] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:05.134 [2024-07-15 18:36:50.540517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:05.134 pt2 00:22:05.134 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:05.392 [2024-07-15 18:36:50.800697] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.392 18:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.650 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.650 "name": "raid_bdev1", 00:22:05.650 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:05.650 "strip_size_kb": 0, 00:22:05.650 "state": "configuring", 00:22:05.650 "raid_level": "raid1", 00:22:05.650 "superblock": true, 00:22:05.650 "num_base_bdevs": 4, 00:22:05.650 "num_base_bdevs_discovered": 1, 00:22:05.650 "num_base_bdevs_operational": 4, 00:22:05.650 "base_bdevs_list": [ 00:22:05.650 { 00:22:05.650 "name": "pt1", 00:22:05.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:05.650 "is_configured": true, 00:22:05.650 "data_offset": 2048, 00:22:05.650 "data_size": 63488 00:22:05.650 }, 00:22:05.650 { 00:22:05.650 "name": null, 00:22:05.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:05.650 "is_configured": false, 00:22:05.650 "data_offset": 2048, 00:22:05.650 
"data_size": 63488 00:22:05.650 }, 00:22:05.650 { 00:22:05.650 "name": null, 00:22:05.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:05.650 "is_configured": false, 00:22:05.650 "data_offset": 2048, 00:22:05.650 "data_size": 63488 00:22:05.650 }, 00:22:05.650 { 00:22:05.650 "name": null, 00:22:05.650 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:05.650 "is_configured": false, 00:22:05.650 "data_offset": 2048, 00:22:05.650 "data_size": 63488 00:22:05.650 } 00:22:05.650 ] 00:22:05.650 }' 00:22:05.650 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.650 18:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.216 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:06.216 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:06.216 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:06.474 [2024-07-15 18:36:51.979861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:06.474 [2024-07-15 18:36:51.979910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.474 [2024-07-15 18:36:51.979926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd7e30 00:22:06.474 [2024-07-15 18:36:51.979936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.474 [2024-07-15 18:36:51.980284] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.474 [2024-07-15 18:36:51.980303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:06.474 [2024-07-15 18:36:51.980363] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
pt2 00:22:06.474 [2024-07-15 18:36:51.980382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:06.474 pt2 00:22:06.474 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:06.474 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:06.474 18:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:06.768 [2024-07-15 18:36:52.236547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:06.768 [2024-07-15 18:36:52.236586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.768 [2024-07-15 18:36:52.236603] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2186fc0 00:22:06.768 [2024-07-15 18:36:52.236612] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.768 [2024-07-15 18:36:52.236928] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.768 [2024-07-15 18:36:52.236944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:06.768 [2024-07-15 18:36:52.237011] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:06.768 [2024-07-15 18:36:52.237036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:06.768 pt3 00:22:06.768 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:06.768 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:06.768 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:22:07.026 [2024-07-15 18:36:52.481211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:07.026 [2024-07-15 18:36:52.481255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.026 [2024-07-15 18:36:52.481269] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2185030 00:22:07.026 [2024-07-15 18:36:52.481278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.026 [2024-07-15 18:36:52.481602] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.026 [2024-07-15 18:36:52.481618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:07.026 [2024-07-15 18:36:52.481674] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:07.026 [2024-07-15 18:36:52.481692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:07.026 [2024-07-15 18:36:52.481819] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2185ee0 00:22:07.026 [2024-07-15 18:36:52.481828] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:07.026 [2024-07-15 18:36:52.482029] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21891b0 00:22:07.026 [2024-07-15 18:36:52.482173] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2185ee0 00:22:07.026 [2024-07-15 18:36:52.482181] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2185ee0 00:22:07.026 [2024-07-15 18:36:52.482280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.026 pt4 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:07.026 18:36:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.026 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.284 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.284 "name": "raid_bdev1", 00:22:07.284 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:07.284 "strip_size_kb": 0, 00:22:07.284 "state": "online", 00:22:07.284 "raid_level": "raid1", 00:22:07.284 "superblock": true, 00:22:07.284 "num_base_bdevs": 4, 00:22:07.284 "num_base_bdevs_discovered": 4, 00:22:07.284 "num_base_bdevs_operational": 4, 00:22:07.284 "base_bdevs_list": [ 00:22:07.284 { 00:22:07.284 "name": "pt1", 00:22:07.284 "uuid": "00000000-0000-0000-0000-000000000001", 
00:22:07.284 "is_configured": true, 00:22:07.284 "data_offset": 2048, 00:22:07.284 "data_size": 63488 00:22:07.284 }, 00:22:07.284 { 00:22:07.284 "name": "pt2", 00:22:07.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:07.284 "is_configured": true, 00:22:07.284 "data_offset": 2048, 00:22:07.284 "data_size": 63488 00:22:07.285 }, 00:22:07.285 { 00:22:07.285 "name": "pt3", 00:22:07.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:07.285 "is_configured": true, 00:22:07.285 "data_offset": 2048, 00:22:07.285 "data_size": 63488 00:22:07.285 }, 00:22:07.285 { 00:22:07.285 "name": "pt4", 00:22:07.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:07.285 "is_configured": true, 00:22:07.285 "data_offset": 2048, 00:22:07.285 "data_size": 63488 00:22:07.285 } 00:22:07.285 ] 00:22:07.285 }' 00:22:07.285 18:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.285 18:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:07.849 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:08.106 [2024-07-15 18:36:53.516323] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.106 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:08.106 "name": "raid_bdev1", 00:22:08.106 "aliases": [ 00:22:08.106 "f6aaad14-028d-45b4-888f-1fee62d201bd" 00:22:08.106 ], 00:22:08.106 "product_name": "Raid Volume", 00:22:08.106 "block_size": 512, 00:22:08.106 "num_blocks": 63488, 00:22:08.106 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:08.106 "assigned_rate_limits": { 00:22:08.106 "rw_ios_per_sec": 0, 00:22:08.106 "rw_mbytes_per_sec": 0, 00:22:08.106 "r_mbytes_per_sec": 0, 00:22:08.106 "w_mbytes_per_sec": 0 00:22:08.106 }, 00:22:08.106 "claimed": false, 00:22:08.106 "zoned": false, 00:22:08.106 "supported_io_types": { 00:22:08.106 "read": true, 00:22:08.106 "write": true, 00:22:08.106 "unmap": false, 00:22:08.106 "flush": false, 00:22:08.106 "reset": true, 00:22:08.106 "nvme_admin": false, 00:22:08.106 "nvme_io": false, 00:22:08.106 "nvme_io_md": false, 00:22:08.106 "write_zeroes": true, 00:22:08.106 "zcopy": false, 00:22:08.106 "get_zone_info": false, 00:22:08.106 "zone_management": false, 00:22:08.106 "zone_append": false, 00:22:08.106 "compare": false, 00:22:08.106 "compare_and_write": false, 00:22:08.106 "abort": false, 00:22:08.106 "seek_hole": false, 00:22:08.106 "seek_data": false, 00:22:08.106 "copy": false, 00:22:08.106 "nvme_iov_md": false 00:22:08.106 }, 00:22:08.106 "memory_domains": [ 00:22:08.106 { 00:22:08.106 "dma_device_id": "system", 00:22:08.106 "dma_device_type": 1 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.106 "dma_device_type": 2 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "system", 00:22:08.106 "dma_device_type": 1 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.106 "dma_device_type": 2 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "system", 00:22:08.106 
"dma_device_type": 1 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.106 "dma_device_type": 2 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "system", 00:22:08.106 "dma_device_type": 1 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.106 "dma_device_type": 2 00:22:08.106 } 00:22:08.106 ], 00:22:08.106 "driver_specific": { 00:22:08.106 "raid": { 00:22:08.106 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:08.106 "strip_size_kb": 0, 00:22:08.106 "state": "online", 00:22:08.106 "raid_level": "raid1", 00:22:08.106 "superblock": true, 00:22:08.106 "num_base_bdevs": 4, 00:22:08.106 "num_base_bdevs_discovered": 4, 00:22:08.106 "num_base_bdevs_operational": 4, 00:22:08.106 "base_bdevs_list": [ 00:22:08.106 { 00:22:08.106 "name": "pt1", 00:22:08.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:08.106 "is_configured": true, 00:22:08.106 "data_offset": 2048, 00:22:08.106 "data_size": 63488 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "name": "pt2", 00:22:08.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:08.106 "is_configured": true, 00:22:08.106 "data_offset": 2048, 00:22:08.106 "data_size": 63488 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "name": "pt3", 00:22:08.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:08.106 "is_configured": true, 00:22:08.106 "data_offset": 2048, 00:22:08.106 "data_size": 63488 00:22:08.106 }, 00:22:08.106 { 00:22:08.106 "name": "pt4", 00:22:08.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:08.106 "is_configured": true, 00:22:08.106 "data_offset": 2048, 00:22:08.106 "data_size": 63488 00:22:08.106 } 00:22:08.106 ] 00:22:08.106 } 00:22:08.106 } 00:22:08.106 }' 00:22:08.106 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:08.106 18:36:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:08.106 pt2 00:22:08.106 pt3 00:22:08.106 pt4' 00:22:08.106 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:08.106 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:08.106 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:08.364 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:08.364 "name": "pt1", 00:22:08.364 "aliases": [ 00:22:08.364 "00000000-0000-0000-0000-000000000001" 00:22:08.364 ], 00:22:08.364 "product_name": "passthru", 00:22:08.364 "block_size": 512, 00:22:08.364 "num_blocks": 65536, 00:22:08.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:08.364 "assigned_rate_limits": { 00:22:08.364 "rw_ios_per_sec": 0, 00:22:08.364 "rw_mbytes_per_sec": 0, 00:22:08.364 "r_mbytes_per_sec": 0, 00:22:08.364 "w_mbytes_per_sec": 0 00:22:08.364 }, 00:22:08.364 "claimed": true, 00:22:08.364 "claim_type": "exclusive_write", 00:22:08.364 "zoned": false, 00:22:08.364 "supported_io_types": { 00:22:08.364 "read": true, 00:22:08.364 "write": true, 00:22:08.364 "unmap": true, 00:22:08.364 "flush": true, 00:22:08.364 "reset": true, 00:22:08.364 "nvme_admin": false, 00:22:08.364 "nvme_io": false, 00:22:08.364 "nvme_io_md": false, 00:22:08.364 "write_zeroes": true, 00:22:08.364 "zcopy": true, 00:22:08.364 "get_zone_info": false, 00:22:08.364 "zone_management": false, 00:22:08.364 "zone_append": false, 00:22:08.364 "compare": false, 00:22:08.364 "compare_and_write": false, 00:22:08.364 "abort": true, 00:22:08.364 "seek_hole": false, 00:22:08.364 "seek_data": false, 00:22:08.364 "copy": true, 00:22:08.364 "nvme_iov_md": false 00:22:08.364 }, 00:22:08.364 "memory_domains": [ 00:22:08.364 { 00:22:08.364 "dma_device_id": "system", 00:22:08.364 
"dma_device_type": 1 00:22:08.364 }, 00:22:08.364 { 00:22:08.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.364 "dma_device_type": 2 00:22:08.364 } 00:22:08.364 ], 00:22:08.364 "driver_specific": { 00:22:08.364 "passthru": { 00:22:08.364 "name": "pt1", 00:22:08.364 "base_bdev_name": "malloc1" 00:22:08.364 } 00:22:08.364 } 00:22:08.364 }' 00:22:08.364 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:08.364 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:08.621 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:08.621 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:08.622 18:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:08.622 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:08.622 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:08.622 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:08.622 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:08.622 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:08.622 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:08.879 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:08.879 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:08.879 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:08.880 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.137 18:36:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.137 "name": "pt2", 00:22:09.137 "aliases": [ 00:22:09.137 "00000000-0000-0000-0000-000000000002" 00:22:09.137 ], 00:22:09.137 "product_name": "passthru", 00:22:09.137 "block_size": 512, 00:22:09.137 "num_blocks": 65536, 00:22:09.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:09.137 "assigned_rate_limits": { 00:22:09.137 "rw_ios_per_sec": 0, 00:22:09.137 "rw_mbytes_per_sec": 0, 00:22:09.137 "r_mbytes_per_sec": 0, 00:22:09.137 "w_mbytes_per_sec": 0 00:22:09.137 }, 00:22:09.137 "claimed": true, 00:22:09.137 "claim_type": "exclusive_write", 00:22:09.137 "zoned": false, 00:22:09.137 "supported_io_types": { 00:22:09.137 "read": true, 00:22:09.137 "write": true, 00:22:09.137 "unmap": true, 00:22:09.137 "flush": true, 00:22:09.137 "reset": true, 00:22:09.137 "nvme_admin": false, 00:22:09.138 "nvme_io": false, 00:22:09.138 "nvme_io_md": false, 00:22:09.138 "write_zeroes": true, 00:22:09.138 "zcopy": true, 00:22:09.138 "get_zone_info": false, 00:22:09.138 "zone_management": false, 00:22:09.138 "zone_append": false, 00:22:09.138 "compare": false, 00:22:09.138 "compare_and_write": false, 00:22:09.138 "abort": true, 00:22:09.138 "seek_hole": false, 00:22:09.138 "seek_data": false, 00:22:09.138 "copy": true, 00:22:09.138 "nvme_iov_md": false 00:22:09.138 }, 00:22:09.138 "memory_domains": [ 00:22:09.138 { 00:22:09.138 "dma_device_id": "system", 00:22:09.138 "dma_device_type": 1 00:22:09.138 }, 00:22:09.138 { 00:22:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.138 "dma_device_type": 2 00:22:09.138 } 00:22:09.138 ], 00:22:09.138 "driver_specific": { 00:22:09.138 "passthru": { 00:22:09.138 "name": "pt2", 00:22:09.138 "base_bdev_name": "malloc2" 00:22:09.138 } 00:22:09.138 } 00:22:09.138 }' 00:22:09.138 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.138 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.138 18:36:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:09.138 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.138 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.138 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:09.138 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:09.396 18:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.655 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.655 "name": "pt3", 00:22:09.655 "aliases": [ 00:22:09.655 "00000000-0000-0000-0000-000000000003" 00:22:09.655 ], 00:22:09.655 "product_name": "passthru", 00:22:09.655 "block_size": 512, 00:22:09.655 "num_blocks": 65536, 00:22:09.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:09.655 "assigned_rate_limits": { 00:22:09.655 "rw_ios_per_sec": 0, 00:22:09.655 "rw_mbytes_per_sec": 0, 00:22:09.655 "r_mbytes_per_sec": 0, 00:22:09.655 "w_mbytes_per_sec": 0 00:22:09.655 }, 00:22:09.655 "claimed": true, 00:22:09.655 
"claim_type": "exclusive_write", 00:22:09.655 "zoned": false, 00:22:09.655 "supported_io_types": { 00:22:09.655 "read": true, 00:22:09.655 "write": true, 00:22:09.655 "unmap": true, 00:22:09.655 "flush": true, 00:22:09.655 "reset": true, 00:22:09.655 "nvme_admin": false, 00:22:09.655 "nvme_io": false, 00:22:09.655 "nvme_io_md": false, 00:22:09.655 "write_zeroes": true, 00:22:09.655 "zcopy": true, 00:22:09.655 "get_zone_info": false, 00:22:09.655 "zone_management": false, 00:22:09.655 "zone_append": false, 00:22:09.655 "compare": false, 00:22:09.655 "compare_and_write": false, 00:22:09.655 "abort": true, 00:22:09.655 "seek_hole": false, 00:22:09.655 "seek_data": false, 00:22:09.655 "copy": true, 00:22:09.655 "nvme_iov_md": false 00:22:09.655 }, 00:22:09.655 "memory_domains": [ 00:22:09.655 { 00:22:09.655 "dma_device_id": "system", 00:22:09.655 "dma_device_type": 1 00:22:09.655 }, 00:22:09.655 { 00:22:09.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.655 "dma_device_type": 2 00:22:09.655 } 00:22:09.655 ], 00:22:09.655 "driver_specific": { 00:22:09.655 "passthru": { 00:22:09.655 "name": "pt3", 00:22:09.655 "base_bdev_name": "malloc3" 00:22:09.655 } 00:22:09.655 } 00:22:09.655 }' 00:22:09.655 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.655 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.913 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.170 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.170 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.170 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:10.170 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.736 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.736 "name": "pt4", 00:22:10.736 "aliases": [ 00:22:10.736 "00000000-0000-0000-0000-000000000004" 00:22:10.736 ], 00:22:10.736 "product_name": "passthru", 00:22:10.736 "block_size": 512, 00:22:10.736 "num_blocks": 65536, 00:22:10.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:10.736 "assigned_rate_limits": { 00:22:10.736 "rw_ios_per_sec": 0, 00:22:10.736 "rw_mbytes_per_sec": 0, 00:22:10.736 "r_mbytes_per_sec": 0, 00:22:10.736 "w_mbytes_per_sec": 0 00:22:10.736 }, 00:22:10.736 "claimed": true, 00:22:10.736 "claim_type": "exclusive_write", 00:22:10.736 "zoned": false, 00:22:10.736 "supported_io_types": { 00:22:10.736 "read": true, 00:22:10.736 "write": true, 00:22:10.736 "unmap": true, 00:22:10.736 "flush": true, 00:22:10.736 "reset": true, 00:22:10.736 "nvme_admin": false, 00:22:10.736 "nvme_io": false, 00:22:10.736 "nvme_io_md": false, 00:22:10.736 "write_zeroes": true, 00:22:10.736 "zcopy": true, 00:22:10.736 "get_zone_info": false, 00:22:10.736 "zone_management": false, 00:22:10.736 "zone_append": false, 00:22:10.736 "compare": false, 00:22:10.736 
"compare_and_write": false, 00:22:10.736 "abort": true, 00:22:10.736 "seek_hole": false, 00:22:10.736 "seek_data": false, 00:22:10.736 "copy": true, 00:22:10.736 "nvme_iov_md": false 00:22:10.736 }, 00:22:10.736 "memory_domains": [ 00:22:10.736 { 00:22:10.736 "dma_device_id": "system", 00:22:10.736 "dma_device_type": 1 00:22:10.736 }, 00:22:10.736 { 00:22:10.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.736 "dma_device_type": 2 00:22:10.736 } 00:22:10.736 ], 00:22:10.736 "driver_specific": { 00:22:10.736 "passthru": { 00:22:10.736 "name": "pt4", 00:22:10.736 "base_bdev_name": "malloc4" 00:22:10.736 } 00:22:10.736 } 00:22:10.736 }' 00:22:10.736 18:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.736 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.994 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.995 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:10.995 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:11.253 [2024-07-15 18:36:56.564524] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.253 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f6aaad14-028d-45b4-888f-1fee62d201bd '!=' f6aaad14-028d-45b4-888f-1fee62d201bd ']' 00:22:11.253 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:11.253 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:11.253 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:11.253 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:11.513 [2024-07-15 18:36:56.820915] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.513 18:36:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.804 18:36:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.804 "name": "raid_bdev1", 00:22:11.804 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:11.804 "strip_size_kb": 0, 00:22:11.804 "state": "online", 00:22:11.804 "raid_level": "raid1", 00:22:11.804 "superblock": true, 00:22:11.804 "num_base_bdevs": 4, 00:22:11.804 "num_base_bdevs_discovered": 3, 00:22:11.804 "num_base_bdevs_operational": 3, 00:22:11.804 "base_bdevs_list": [ 00:22:11.804 { 00:22:11.804 "name": null, 00:22:11.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.804 "is_configured": false, 00:22:11.804 "data_offset": 2048, 00:22:11.804 "data_size": 63488 00:22:11.804 }, 00:22:11.804 { 00:22:11.804 "name": "pt2", 00:22:11.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:11.804 "is_configured": true, 00:22:11.804 "data_offset": 2048, 00:22:11.804 "data_size": 63488 00:22:11.804 }, 00:22:11.804 { 00:22:11.804 "name": "pt3", 00:22:11.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:11.804 "is_configured": true, 00:22:11.804 "data_offset": 2048, 00:22:11.804 "data_size": 63488 00:22:11.804 }, 00:22:11.804 { 00:22:11.804 "name": "pt4", 00:22:11.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:11.804 "is_configured": true, 00:22:11.804 "data_offset": 2048, 00:22:11.804 "data_size": 63488 00:22:11.804 } 00:22:11.804 ] 00:22:11.804 }' 00:22:11.804 18:36:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.804 
18:36:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.385 18:36:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:12.643 [2024-07-15 18:36:57.943896] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.643 [2024-07-15 18:36:57.943921] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.643 [2024-07-15 18:36:57.943973] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.643 [2024-07-15 18:36:57.944039] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.643 [2024-07-15 18:36:57.944048] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2185ee0 name raid_bdev1, state offline 00:22:12.643 18:36:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:22:12.643 18:36:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.901 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:22:12.901 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:22:12.901 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:22:12.901 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:12.901 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:13.159 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:13.159 18:36:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:13.159 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:13.159 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:13.159 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:13.159 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:13.417 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:13.417 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:13.417 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:22:13.417 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:13.417 18:36:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:13.676 [2024-07-15 18:36:59.171128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:13.676 [2024-07-15 18:36:59.171170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.676 [2024-07-15 18:36:59.171184] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2188eb0 00:22:13.676 [2024-07-15 18:36:59.171194] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.676 [2024-07-15 18:36:59.172846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.676 [2024-07-15 18:36:59.172874] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:22:13.676 [2024-07-15 18:36:59.172933] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:13.676 [2024-07-15 18:36:59.172966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.676 pt2 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.676 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.935 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.935 "name": "raid_bdev1", 00:22:13.935 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:13.935 "strip_size_kb": 0, 00:22:13.935 "state": "configuring", 
00:22:13.935 "raid_level": "raid1", 00:22:13.935 "superblock": true, 00:22:13.935 "num_base_bdevs": 4, 00:22:13.935 "num_base_bdevs_discovered": 1, 00:22:13.935 "num_base_bdevs_operational": 3, 00:22:13.935 "base_bdevs_list": [ 00:22:13.935 { 00:22:13.935 "name": null, 00:22:13.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.935 "is_configured": false, 00:22:13.935 "data_offset": 2048, 00:22:13.935 "data_size": 63488 00:22:13.935 }, 00:22:13.935 { 00:22:13.935 "name": "pt2", 00:22:13.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.935 "is_configured": true, 00:22:13.935 "data_offset": 2048, 00:22:13.935 "data_size": 63488 00:22:13.935 }, 00:22:13.935 { 00:22:13.935 "name": null, 00:22:13.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.935 "is_configured": false, 00:22:13.935 "data_offset": 2048, 00:22:13.935 "data_size": 63488 00:22:13.935 }, 00:22:13.935 { 00:22:13.935 "name": null, 00:22:13.935 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:13.935 "is_configured": false, 00:22:13.935 "data_offset": 2048, 00:22:13.935 "data_size": 63488 00:22:13.935 } 00:22:13.935 ] 00:22:13.935 }' 00:22:13.935 18:36:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.935 18:36:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.869 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:22:14.869 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:14.869 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:15.127 [2024-07-15 18:37:00.438544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:15.127 [2024-07-15 18:37:00.438589] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.127 [2024-07-15 18:37:00.438605] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd9a40 00:22:15.127 [2024-07-15 18:37:00.438614] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.127 [2024-07-15 18:37:00.438986] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.127 [2024-07-15 18:37:00.439004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:15.127 [2024-07-15 18:37:00.439066] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:15.127 [2024-07-15 18:37:00.439084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:15.127 pt3 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.127 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.386 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.386 "name": "raid_bdev1", 00:22:15.386 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:15.386 "strip_size_kb": 0, 00:22:15.386 "state": "configuring", 00:22:15.386 "raid_level": "raid1", 00:22:15.386 "superblock": true, 00:22:15.386 "num_base_bdevs": 4, 00:22:15.386 "num_base_bdevs_discovered": 2, 00:22:15.386 "num_base_bdevs_operational": 3, 00:22:15.386 "base_bdevs_list": [ 00:22:15.386 { 00:22:15.386 "name": null, 00:22:15.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.386 "is_configured": false, 00:22:15.386 "data_offset": 2048, 00:22:15.386 "data_size": 63488 00:22:15.386 }, 00:22:15.386 { 00:22:15.386 "name": "pt2", 00:22:15.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.386 "is_configured": true, 00:22:15.386 "data_offset": 2048, 00:22:15.386 "data_size": 63488 00:22:15.386 }, 00:22:15.386 { 00:22:15.386 "name": "pt3", 00:22:15.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.386 "is_configured": true, 00:22:15.386 "data_offset": 2048, 00:22:15.386 "data_size": 63488 00:22:15.386 }, 00:22:15.386 { 00:22:15.386 "name": null, 00:22:15.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:15.386 "is_configured": false, 00:22:15.386 "data_offset": 2048, 00:22:15.386 "data_size": 63488 00:22:15.386 } 00:22:15.386 ] 00:22:15.386 }' 00:22:15.386 18:37:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.386 18:37:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.952 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:22:15.952 18:37:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:15.952 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:22:15.952 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:16.211 [2024-07-15 18:37:01.621711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:16.211 [2024-07-15 18:37:01.621757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.211 [2024-07-15 18:37:01.621773] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2188600 00:22:16.211 [2024-07-15 18:37:01.621782] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.211 [2024-07-15 18:37:01.622120] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.211 [2024-07-15 18:37:01.622138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:16.211 [2024-07-15 18:37:01.622196] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:16.211 [2024-07-15 18:37:01.622214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:16.211 [2024-07-15 18:37:01.622328] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fd7d50 00:22:16.211 [2024-07-15 18:37:01.622337] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:16.211 [2024-07-15 18:37:01.622517] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x206f9f0 00:22:16.211 [2024-07-15 18:37:01.622656] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fd7d50 00:22:16.211 [2024-07-15 18:37:01.622664] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x1fd7d50 00:22:16.211 [2024-07-15 18:37:01.622761] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.211 pt4 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.211 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.469 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.469 "name": "raid_bdev1", 00:22:16.469 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:16.469 "strip_size_kb": 0, 00:22:16.469 "state": "online", 00:22:16.469 "raid_level": "raid1", 00:22:16.469 "superblock": true, 00:22:16.470 "num_base_bdevs": 4, 00:22:16.470 "num_base_bdevs_discovered": 3, 00:22:16.470 
"num_base_bdevs_operational": 3, 00:22:16.470 "base_bdevs_list": [ 00:22:16.470 { 00:22:16.470 "name": null, 00:22:16.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.470 "is_configured": false, 00:22:16.470 "data_offset": 2048, 00:22:16.470 "data_size": 63488 00:22:16.470 }, 00:22:16.470 { 00:22:16.470 "name": "pt2", 00:22:16.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.470 "is_configured": true, 00:22:16.470 "data_offset": 2048, 00:22:16.470 "data_size": 63488 00:22:16.470 }, 00:22:16.470 { 00:22:16.470 "name": "pt3", 00:22:16.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:16.470 "is_configured": true, 00:22:16.470 "data_offset": 2048, 00:22:16.470 "data_size": 63488 00:22:16.470 }, 00:22:16.470 { 00:22:16.470 "name": "pt4", 00:22:16.470 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:16.470 "is_configured": true, 00:22:16.470 "data_offset": 2048, 00:22:16.470 "data_size": 63488 00:22:16.470 } 00:22:16.470 ] 00:22:16.470 }' 00:22:16.470 18:37:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.470 18:37:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.036 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:17.294 [2024-07-15 18:37:02.712625] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.294 [2024-07-15 18:37:02.712651] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.294 [2024-07-15 18:37:02.712700] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.294 [2024-07-15 18:37:02.712764] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.294 [2024-07-15 18:37:02.712773] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x1fd7d50 name raid_bdev1, state offline 00:22:17.294 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.294 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:22:17.552 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:22:17.553 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:22:17.553 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:22:17.553 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:22:17.553 18:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:17.811 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:18.069 [2024-07-15 18:37:03.486654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:18.069 [2024-07-15 18:37:03.486694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.069 [2024-07-15 18:37:03.486710] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21896a0 00:22:18.069 [2024-07-15 18:37:03.486719] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.069 [2024-07-15 18:37:03.488376] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.069 [2024-07-15 18:37:03.488402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:18.069 [2024-07-15 18:37:03.488458] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:22:18.069 [2024-07-15 18:37:03.488482] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.069 [2024-07-15 18:37:03.488580] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:18.069 [2024-07-15 18:37:03.488591] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.069 [2024-07-15 18:37:03.488602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x206fdd0 name raid_bdev1, state configuring 00:22:18.069 [2024-07-15 18:37:03.488629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.069 [2024-07-15 18:37:03.488708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:18.069 pt1 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.069 18:37:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.069 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.327 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.327 "name": "raid_bdev1", 00:22:18.327 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:18.327 "strip_size_kb": 0, 00:22:18.327 "state": "configuring", 00:22:18.327 "raid_level": "raid1", 00:22:18.327 "superblock": true, 00:22:18.327 "num_base_bdevs": 4, 00:22:18.327 "num_base_bdevs_discovered": 2, 00:22:18.327 "num_base_bdevs_operational": 3, 00:22:18.327 "base_bdevs_list": [ 00:22:18.327 { 00:22:18.327 "name": null, 00:22:18.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.327 "is_configured": false, 00:22:18.327 "data_offset": 2048, 00:22:18.327 "data_size": 63488 00:22:18.327 }, 00:22:18.327 { 00:22:18.327 "name": "pt2", 00:22:18.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.327 "is_configured": true, 00:22:18.327 "data_offset": 2048, 00:22:18.327 "data_size": 63488 00:22:18.327 }, 00:22:18.327 { 00:22:18.327 "name": "pt3", 00:22:18.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.327 "is_configured": true, 00:22:18.327 "data_offset": 2048, 00:22:18.327 "data_size": 63488 00:22:18.327 }, 00:22:18.327 { 00:22:18.327 "name": null, 00:22:18.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.327 "is_configured": false, 00:22:18.327 "data_offset": 2048, 00:22:18.327 "data_size": 63488 00:22:18.327 } 00:22:18.327 ] 00:22:18.327 }' 00:22:18.327 18:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.327 18:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:19.263 18:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:22:19.263 18:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:19.263 18:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:22:19.263 18:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:19.522 [2024-07-15 18:37:05.014793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:19.522 [2024-07-15 18:37:05.014843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.522 [2024-07-15 18:37:05.014860] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2188070 00:22:19.522 [2024-07-15 18:37:05.014869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.523 [2024-07-15 18:37:05.015227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.523 [2024-07-15 18:37:05.015246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:19.523 [2024-07-15 18:37:05.015315] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:19.523 [2024-07-15 18:37:05.015334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:19.523 [2024-07-15 18:37:05.015450] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fd9050 00:22:19.523 [2024-07-15 18:37:05.015459] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:19.523 [2024-07-15 18:37:05.015638] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x206f910 00:22:19.523 [2024-07-15 18:37:05.015777] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fd9050 00:22:19.523 [2024-07-15 18:37:05.015785] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fd9050 00:22:19.523 [2024-07-15 18:37:05.015885] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.523 pt4 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.523 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.782 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.782 "name": "raid_bdev1", 
00:22:19.782 "uuid": "f6aaad14-028d-45b4-888f-1fee62d201bd", 00:22:19.782 "strip_size_kb": 0, 00:22:19.782 "state": "online", 00:22:19.782 "raid_level": "raid1", 00:22:19.782 "superblock": true, 00:22:19.782 "num_base_bdevs": 4, 00:22:19.782 "num_base_bdevs_discovered": 3, 00:22:19.782 "num_base_bdevs_operational": 3, 00:22:19.782 "base_bdevs_list": [ 00:22:19.782 { 00:22:19.782 "name": null, 00:22:19.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.782 "is_configured": false, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 }, 00:22:19.782 { 00:22:19.782 "name": "pt2", 00:22:19.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 }, 00:22:19.782 { 00:22:19.782 "name": "pt3", 00:22:19.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 }, 00:22:19.782 { 00:22:19.782 "name": "pt4", 00:22:19.782 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 } 00:22:19.782 ] 00:22:19.782 }' 00:22:19.782 18:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.782 18:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.718 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:20.718 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:20.976 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:20.976 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:20.976 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:21.233 [2024-07-15 18:37:06.679556] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' f6aaad14-028d-45b4-888f-1fee62d201bd '!=' f6aaad14-028d-45b4-888f-1fee62d201bd ']' 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2881187 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2881187 ']' 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2881187 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881187 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881187' 00:22:21.233 killing process with pid 2881187 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2881187 00:22:21.233 [2024-07-15 18:37:06.741207] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:21.233 [2024-07-15 18:37:06.741263] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.233 [2024-07-15 18:37:06.741329] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:21.233 [2024-07-15 18:37:06.741338] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fd9050 name raid_bdev1, state offline 00:22:21.233 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2881187 00:22:21.233 [2024-07-15 18:37:06.777128] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.491 18:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:21.491 00:22:21.491 real 0m27.336s 00:22:21.491 user 0m51.385s 00:22:21.491 sys 0m3.691s 00:22:21.491 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:21.491 18:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.491 ************************************ 00:22:21.491 END TEST raid_superblock_test 00:22:21.491 ************************************ 00:22:21.491 18:37:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:21.491 18:37:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:22:21.491 18:37:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:21.491 18:37:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.491 18:37:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:21.491 ************************************ 00:22:21.491 START TEST raid_read_error_test 00:22:21.491 ************************************ 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:21.491 
18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:21.491 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:21.492 18:37:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qXThExkbOy 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2885947 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2885947 /var/tmp/spdk-raid.sock 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2885947 ']' 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:21.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.492 18:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.750 [2024-07-15 18:37:07.089314] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:22:21.750 [2024-07-15 18:37:07.089374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885947 ] 00:22:21.750 [2024-07-15 18:37:07.189564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.750 [2024-07-15 18:37:07.284696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.007 [2024-07-15 18:37:07.340510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.007 [2024-07-15 18:37:07.340536] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.571 18:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.571 18:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:22.571 18:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:22.571 18:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:22.828 BaseBdev1_malloc 00:22:22.828 18:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:23.085 true 00:22:23.085 18:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:23.341 [2024-07-15 18:37:08.801799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:23.341 [2024-07-15 18:37:08.801839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.341 [2024-07-15 18:37:08.801861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2673d20 00:22:23.341 [2024-07-15 18:37:08.801871] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.341 [2024-07-15 18:37:08.803666] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.341 [2024-07-15 18:37:08.803695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:23.341 BaseBdev1 00:22:23.341 18:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:23.341 18:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:23.597 BaseBdev2_malloc 00:22:23.597 18:37:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:23.854 true 00:22:23.854 18:37:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:24.111 [2024-07-15 18:37:09.572463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:24.111 [2024-07-15 18:37:09.572504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.111 [2024-07-15 18:37:09.572525] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x2678d50 00:22:24.111 [2024-07-15 18:37:09.572534] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.111 [2024-07-15 18:37:09.574163] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.111 [2024-07-15 18:37:09.574189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:24.111 BaseBdev2 00:22:24.111 18:37:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:24.111 18:37:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:24.367 BaseBdev3_malloc 00:22:24.367 18:37:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:24.624 true 00:22:24.624 18:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:24.881 [2024-07-15 18:37:10.351096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:24.881 [2024-07-15 18:37:10.351138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.881 [2024-07-15 18:37:10.351158] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2677ef0 00:22:24.881 [2024-07-15 18:37:10.351167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.881 [2024-07-15 18:37:10.352714] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.881 [2024-07-15 18:37:10.352741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:24.881 
BaseBdev3 00:22:24.881 18:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:24.881 18:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:25.138 BaseBdev4_malloc 00:22:25.138 18:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:25.396 true 00:22:25.396 18:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:25.686 [2024-07-15 18:37:11.149598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:25.686 [2024-07-15 18:37:11.149638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.686 [2024-07-15 18:37:11.149659] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x267c280 00:22:25.686 [2024-07-15 18:37:11.149669] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.686 [2024-07-15 18:37:11.151317] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.686 [2024-07-15 18:37:11.151344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:25.686 BaseBdev4 00:22:25.686 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:25.949 [2024-07-15 18:37:11.402301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.949 [2024-07-15 
18:37:11.403623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.949 [2024-07-15 18:37:11.403692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.949 [2024-07-15 18:37:11.403752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.949 [2024-07-15 18:37:11.404001] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x267dd90 00:22:25.949 [2024-07-15 18:37:11.404012] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:25.949 [2024-07-15 18:37:11.404209] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x267daa0 00:22:25.949 [2024-07-15 18:37:11.404372] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x267dd90 00:22:25.949 [2024-07-15 18:37:11.404381] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x267dd90 00:22:25.949 [2024-07-15 18:37:11.404488] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.949 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.208 18:37:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.208 "name": "raid_bdev1", 00:22:26.208 "uuid": "9883149a-377d-43d3-9549-d056c5446539", 00:22:26.208 "strip_size_kb": 0, 00:22:26.208 "state": "online", 00:22:26.208 "raid_level": "raid1", 00:22:26.208 "superblock": true, 00:22:26.208 "num_base_bdevs": 4, 00:22:26.208 "num_base_bdevs_discovered": 4, 00:22:26.208 "num_base_bdevs_operational": 4, 00:22:26.208 "base_bdevs_list": [ 00:22:26.208 { 00:22:26.208 "name": "BaseBdev1", 00:22:26.208 "uuid": "9eead594-8e00-5706-8408-a0e7912d4fbc", 00:22:26.208 "is_configured": true, 00:22:26.208 "data_offset": 2048, 00:22:26.208 "data_size": 63488 00:22:26.208 }, 00:22:26.208 { 00:22:26.208 "name": "BaseBdev2", 00:22:26.208 "uuid": "efbb46ce-8c43-58ab-9b3b-786d012830f0", 00:22:26.208 "is_configured": true, 00:22:26.208 "data_offset": 2048, 00:22:26.208 "data_size": 63488 00:22:26.208 }, 00:22:26.208 { 00:22:26.208 "name": "BaseBdev3", 00:22:26.208 "uuid": "46195b31-f533-50f8-96e0-cf46974f2b07", 00:22:26.208 "is_configured": true, 00:22:26.208 "data_offset": 2048, 00:22:26.208 "data_size": 63488 00:22:26.208 }, 00:22:26.208 { 00:22:26.208 "name": "BaseBdev4", 00:22:26.208 "uuid": "caeec7d8-50b5-5c9a-8d1b-b548df406ee4", 00:22:26.208 "is_configured": true, 00:22:26.208 "data_offset": 2048, 00:22:26.208 "data_size": 63488 00:22:26.208 } 00:22:26.208 ] 00:22:26.208 }' 00:22:26.208 18:37:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.208 18:37:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.774 18:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:26.774 18:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:27.033 [2024-07-15 18:37:12.425451] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24cf430 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.966 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.223 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.223 "name": "raid_bdev1", 00:22:28.223 "uuid": "9883149a-377d-43d3-9549-d056c5446539", 00:22:28.223 "strip_size_kb": 0, 00:22:28.223 "state": "online", 00:22:28.223 "raid_level": "raid1", 00:22:28.223 "superblock": true, 00:22:28.223 "num_base_bdevs": 4, 00:22:28.223 "num_base_bdevs_discovered": 4, 00:22:28.223 "num_base_bdevs_operational": 4, 00:22:28.223 "base_bdevs_list": [ 00:22:28.223 { 00:22:28.223 "name": "BaseBdev1", 00:22:28.223 "uuid": "9eead594-8e00-5706-8408-a0e7912d4fbc", 00:22:28.223 "is_configured": true, 00:22:28.223 "data_offset": 2048, 00:22:28.223 "data_size": 63488 00:22:28.223 }, 00:22:28.223 { 00:22:28.223 "name": "BaseBdev2", 00:22:28.223 "uuid": "efbb46ce-8c43-58ab-9b3b-786d012830f0", 00:22:28.223 "is_configured": true, 00:22:28.223 "data_offset": 2048, 00:22:28.223 "data_size": 63488 00:22:28.223 }, 00:22:28.224 { 00:22:28.224 "name": "BaseBdev3", 00:22:28.224 "uuid": "46195b31-f533-50f8-96e0-cf46974f2b07", 00:22:28.224 "is_configured": true, 00:22:28.224 "data_offset": 2048, 00:22:28.224 "data_size": 63488 00:22:28.224 }, 00:22:28.224 { 00:22:28.224 "name": "BaseBdev4", 00:22:28.224 "uuid": "caeec7d8-50b5-5c9a-8d1b-b548df406ee4", 00:22:28.224 "is_configured": 
true, 00:22:28.224 "data_offset": 2048, 00:22:28.224 "data_size": 63488 00:22:28.224 } 00:22:28.224 ] 00:22:28.224 }' 00:22:28.224 18:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.224 18:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.790 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:29.047 [2024-07-15 18:37:14.466720] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.047 [2024-07-15 18:37:14.466760] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.047 [2024-07-15 18:37:14.470109] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.047 [2024-07-15 18:37:14.470147] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.047 [2024-07-15 18:37:14.470275] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.048 [2024-07-15 18:37:14.470285] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x267dd90 name raid_bdev1, state offline 00:22:29.048 0 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2885947 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2885947 ']' 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2885947 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885947 00:22:29.048 18:37:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885947' 00:22:29.048 killing process with pid 2885947 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2885947 00:22:29.048 [2024-07-15 18:37:14.545728] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:29.048 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2885947 00:22:29.048 [2024-07-15 18:37:14.575931] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qXThExkbOy 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:29.305 00:22:29.305 real 0m7.771s 00:22:29.305 user 0m12.755s 00:22:29.305 sys 0m1.102s 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:29.305 18:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 ************************************ 00:22:29.305 END TEST 
raid_read_error_test 00:22:29.305 ************************************ 00:22:29.305 18:37:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:29.305 18:37:14 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:22:29.305 18:37:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:29.305 18:37:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.305 18:37:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 ************************************ 00:22:29.305 START TEST raid_write_error_test 00:22:29.306 ************************************ 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.AwfCpCuX1a 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2887117 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2887117 /var/tmp/spdk-raid.sock 00:22:29.306 
18:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2887117 ']' 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.306 18:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.564 [2024-07-15 18:37:14.898835] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:22:29.564 [2024-07-15 18:37:14.898895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887117 ] 00:22:29.564 [2024-07-15 18:37:14.999359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.564 [2024-07-15 18:37:15.095825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.823 [2024-07-15 18:37:15.160242] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.823 [2024-07-15 18:37:15.160274] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:30.759 18:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.759 18:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:30.759 18:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:30.759 18:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:31.018 BaseBdev1_malloc 00:22:31.276 18:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:31.533 true 00:22:31.791 18:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:32.049 [2024-07-15 18:37:17.552792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:32.049 [2024-07-15 18:37:17.552835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:22:32.049 [2024-07-15 18:37:17.552852] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285ad20 00:22:32.049 [2024-07-15 18:37:17.552862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.049 [2024-07-15 18:37:17.554656] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.049 [2024-07-15 18:37:17.554685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:32.049 BaseBdev1 00:22:32.049 18:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:32.049 18:37:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:32.615 BaseBdev2_malloc 00:22:32.615 18:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:33.184 true 00:22:33.184 18:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:33.751 [2024-07-15 18:37:19.041195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:33.751 [2024-07-15 18:37:19.041237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.751 [2024-07-15 18:37:19.041255] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285fd50 00:22:33.751 [2024-07-15 18:37:19.041265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.751 [2024-07-15 18:37:19.042890] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.751 [2024-07-15 18:37:19.042917] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:33.751 BaseBdev2 00:22:33.751 18:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:33.751 18:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:34.009 BaseBdev3_malloc 00:22:34.267 18:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:34.267 true 00:22:34.525 18:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:34.782 [2024-07-15 18:37:20.296992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:34.782 [2024-07-15 18:37:20.297036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.782 [2024-07-15 18:37:20.297054] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285eef0 00:22:34.782 [2024-07-15 18:37:20.297064] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.782 [2024-07-15 18:37:20.298725] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.782 [2024-07-15 18:37:20.298751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:34.782 BaseBdev3 00:22:34.782 18:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:34.782 18:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:35.041 BaseBdev4_malloc 00:22:35.041 18:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:35.609 true 00:22:35.609 18:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:36.176 [2024-07-15 18:37:21.476449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:36.176 [2024-07-15 18:37:21.476492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.176 [2024-07-15 18:37:21.476511] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2863280 00:22:36.176 [2024-07-15 18:37:21.476521] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.176 [2024-07-15 18:37:21.478149] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.176 [2024-07-15 18:37:21.478177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:36.176 BaseBdev4 00:22:36.176 18:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:36.435 [2024-07-15 18:37:21.973782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.435 [2024-07-15 18:37:21.975173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.435 [2024-07-15 18:37:21.975243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.435 [2024-07-15 18:37:21.975303] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:36.435 [2024-07-15 18:37:21.975546] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2864d90 00:22:36.435 [2024-07-15 18:37:21.975557] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:36.435 [2024-07-15 18:37:21.975753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2864aa0 00:22:36.435 [2024-07-15 18:37:21.975919] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2864d90 00:22:36.435 [2024-07-15 18:37:21.975928] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2864d90 00:22:36.435 [2024-07-15 18:37:21.976044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.694 18:37:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.694 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.952 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.952 "name": "raid_bdev1", 00:22:36.952 "uuid": "0c532bad-837c-4696-9d53-4c2f5ab2d2b3", 00:22:36.952 "strip_size_kb": 0, 00:22:36.952 "state": "online", 00:22:36.952 "raid_level": "raid1", 00:22:36.952 "superblock": true, 00:22:36.952 "num_base_bdevs": 4, 00:22:36.952 "num_base_bdevs_discovered": 4, 00:22:36.952 "num_base_bdevs_operational": 4, 00:22:36.952 "base_bdevs_list": [ 00:22:36.952 { 00:22:36.952 "name": "BaseBdev1", 00:22:36.952 "uuid": "e72eee85-ecab-51a1-b89d-242de0177f0d", 00:22:36.952 "is_configured": true, 00:22:36.952 "data_offset": 2048, 00:22:36.952 "data_size": 63488 00:22:36.952 }, 00:22:36.952 { 00:22:36.952 "name": "BaseBdev2", 00:22:36.952 "uuid": "fabd97e3-f29d-5e71-b12d-d3b6a01037f5", 00:22:36.952 "is_configured": true, 00:22:36.952 "data_offset": 2048, 00:22:36.952 "data_size": 63488 00:22:36.952 }, 00:22:36.952 { 00:22:36.952 "name": "BaseBdev3", 00:22:36.952 "uuid": "de1003ac-78fa-5b18-a424-d9df661b6c4c", 00:22:36.952 "is_configured": true, 00:22:36.952 "data_offset": 2048, 00:22:36.952 "data_size": 63488 00:22:36.952 }, 00:22:36.952 { 00:22:36.952 "name": "BaseBdev4", 00:22:36.952 "uuid": "3191d906-eca9-5667-8818-a7050f31602a", 00:22:36.952 "is_configured": true, 00:22:36.952 "data_offset": 2048, 00:22:36.952 "data_size": 63488 00:22:36.952 } 00:22:36.952 ] 00:22:36.952 }' 00:22:36.952 18:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.952 18:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.889 18:37:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:22:37.889 18:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:37.889 [2024-07-15 18:37:23.229613] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26b6430 00:22:38.828 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:38.828 [2024-07-15 18:37:24.355101] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:38.828 [2024-07-15 18:37:24.355154] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.828 [2024-07-15 18:37:24.355371] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x26b6430 00:22:38.828 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:38.828 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.087 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.346 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.346 "name": "raid_bdev1", 00:22:39.346 "uuid": "0c532bad-837c-4696-9d53-4c2f5ab2d2b3", 00:22:39.346 "strip_size_kb": 0, 00:22:39.346 "state": "online", 00:22:39.346 "raid_level": "raid1", 00:22:39.346 "superblock": true, 00:22:39.346 "num_base_bdevs": 4, 00:22:39.346 "num_base_bdevs_discovered": 3, 00:22:39.346 "num_base_bdevs_operational": 3, 00:22:39.346 "base_bdevs_list": [ 00:22:39.346 { 00:22:39.346 "name": null, 00:22:39.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.346 "is_configured": false, 00:22:39.346 "data_offset": 2048, 00:22:39.346 "data_size": 63488 00:22:39.346 }, 00:22:39.346 { 00:22:39.346 "name": "BaseBdev2", 00:22:39.346 "uuid": "fabd97e3-f29d-5e71-b12d-d3b6a01037f5", 00:22:39.346 "is_configured": true, 00:22:39.346 "data_offset": 2048, 00:22:39.346 "data_size": 63488 00:22:39.346 }, 00:22:39.346 { 00:22:39.346 "name": "BaseBdev3", 00:22:39.346 "uuid": "de1003ac-78fa-5b18-a424-d9df661b6c4c", 00:22:39.346 "is_configured": true, 00:22:39.346 "data_offset": 2048, 
00:22:39.346 "data_size": 63488 00:22:39.346 }, 00:22:39.346 { 00:22:39.346 "name": "BaseBdev4", 00:22:39.346 "uuid": "3191d906-eca9-5667-8818-a7050f31602a", 00:22:39.346 "is_configured": true, 00:22:39.346 "data_offset": 2048, 00:22:39.346 "data_size": 63488 00:22:39.346 } 00:22:39.346 ] 00:22:39.346 }' 00:22:39.346 18:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.346 18:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.912 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:40.170 [2024-07-15 18:37:25.507578] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:40.170 [2024-07-15 18:37:25.507619] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.170 [2024-07-15 18:37:25.510970] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.170 [2024-07-15 18:37:25.511004] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.170 [2024-07-15 18:37:25.511106] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.170 [2024-07-15 18:37:25.511115] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2864d90 name raid_bdev1, state offline 00:22:40.170 0 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2887117 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2887117 ']' 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2887117 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2887117 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2887117' 00:22:40.170 killing process with pid 2887117 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2887117 00:22:40.170 [2024-07-15 18:37:25.586186] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.170 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2887117 00:22:40.170 [2024-07-15 18:37:25.616489] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.AwfCpCuX1a 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:40.438 00:22:40.438 real 0m10.995s 00:22:40.438 user 0m18.927s 00:22:40.438 sys 0m1.356s 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.438 18:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.438 ************************************ 00:22:40.438 END TEST raid_write_error_test 00:22:40.438 ************************************ 00:22:40.438 18:37:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:40.438 18:37:25 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:22:40.438 18:37:25 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:22:40.438 18:37:25 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:22:40.438 18:37:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:40.438 18:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.438 18:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.438 ************************************ 00:22:40.438 START TEST raid_rebuild_test 00:22:40.438 ************************************ 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 
00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:40.438 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2888902 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2888902 /var/tmp/spdk-raid.sock 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:40.439 18:37:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2888902 ']' 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:40.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.439 18:37:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.439 [2024-07-15 18:37:25.936741] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:22:40.439 [2024-07-15 18:37:25.936804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888902 ] 00:22:40.439 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:40.439 Zero copy mechanism will not be used. 
00:22:40.748 [2024-07-15 18:37:26.034863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.749 [2024-07-15 18:37:26.129188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.749 [2024-07-15 18:37:26.192495] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.749 [2024-07-15 18:37:26.192527] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.720 18:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.720 18:37:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:22:41.720 18:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:41.720 18:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:41.978 BaseBdev1_malloc 00:22:41.978 18:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:42.236 [2024-07-15 18:37:27.634819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:42.236 [2024-07-15 18:37:27.634863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.236 [2024-07-15 18:37:27.634882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2723130 00:22:42.236 [2024-07-15 18:37:27.634891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.236 [2024-07-15 18:37:27.636609] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.236 [2024-07-15 18:37:27.636638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:42.236 BaseBdev1 00:22:42.236 18:37:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:42.236 18:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:42.801 BaseBdev2_malloc 00:22:42.801 18:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:43.058 [2024-07-15 18:37:28.389517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:43.058 [2024-07-15 18:37:28.389562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.058 [2024-07-15 18:37:28.389579] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28c8fa0 00:22:43.058 [2024-07-15 18:37:28.389594] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.058 [2024-07-15 18:37:28.391185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.058 [2024-07-15 18:37:28.391212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:43.058 BaseBdev2 00:22:43.058 18:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:43.624 spare_malloc 00:22:43.624 18:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:43.882 spare_delay 00:22:43.882 18:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:22:44.140 [2024-07-15 18:37:29.641330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.140 [2024-07-15 18:37:29.641371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.140 [2024-07-15 18:37:29.641387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28caf40 00:22:44.140 [2024-07-15 18:37:29.641397] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.140 [2024-07-15 18:37:29.643037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.140 [2024-07-15 18:37:29.643065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.140 spare 00:22:44.140 18:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:44.704 [2024-07-15 18:37:30.126636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.704 [2024-07-15 18:37:30.128039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.704 [2024-07-15 18:37:30.128122] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28cc370 00:22:44.704 [2024-07-15 18:37:30.128132] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:44.704 [2024-07-15 18:37:30.128344] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28cb1d0 00:22:44.704 [2024-07-15 18:37:30.128496] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28cc370 00:22:44.704 [2024-07-15 18:37:30.128504] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28cc370 00:22:44.704 [2024-07-15 18:37:30.128624] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.704 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.961 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.961 "name": "raid_bdev1", 00:22:44.961 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:44.961 "strip_size_kb": 0, 00:22:44.961 "state": "online", 00:22:44.961 "raid_level": "raid1", 00:22:44.961 "superblock": false, 00:22:44.961 "num_base_bdevs": 2, 00:22:44.961 "num_base_bdevs_discovered": 2, 00:22:44.961 "num_base_bdevs_operational": 2, 00:22:44.961 "base_bdevs_list": [ 00:22:44.961 { 00:22:44.961 "name": "BaseBdev1", 00:22:44.961 "uuid": 
"b38c79cb-dd39-562a-9d39-7323ddbaa0c1", 00:22:44.961 "is_configured": true, 00:22:44.961 "data_offset": 0, 00:22:44.961 "data_size": 65536 00:22:44.961 }, 00:22:44.961 { 00:22:44.961 "name": "BaseBdev2", 00:22:44.961 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:44.961 "is_configured": true, 00:22:44.961 "data_offset": 0, 00:22:44.961 "data_size": 65536 00:22:44.961 } 00:22:44.961 ] 00:22:44.961 }' 00:22:44.961 18:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.961 18:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.526 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:45.526 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:45.784 [2024-07-15 18:37:31.282007] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.784 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:22:45.784 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:45.784 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:46.043 18:37:31 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.043 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:46.336 [2024-07-15 18:37:31.795165] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28d4cf0 00:22:46.336 /dev/nbd0 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:46.336 1+0 records in 00:22:46.336 1+0 records out 00:22:46.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205839 s, 19.9 MB/s 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:46.336 18:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:52.895 65536+0 records in 00:22:52.895 65536+0 records out 00:22:52.895 33554432 bytes (34 MB, 32 MiB) copied, 5.30539 s, 6.3 MB/s 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.895 [2024-07-15 18:37:37.352834] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:52.895 [2024-07-15 18:37:37.589499] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:52.895 18:37:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.895 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.895 "name": "raid_bdev1", 00:22:52.895 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:52.895 "strip_size_kb": 0, 00:22:52.895 "state": "online", 00:22:52.895 "raid_level": "raid1", 00:22:52.895 "superblock": false, 00:22:52.895 "num_base_bdevs": 2, 00:22:52.895 "num_base_bdevs_discovered": 1, 00:22:52.895 "num_base_bdevs_operational": 1, 00:22:52.895 "base_bdevs_list": [ 00:22:52.895 { 00:22:52.895 "name": null, 00:22:52.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.895 "is_configured": false, 00:22:52.895 "data_offset": 0, 00:22:52.895 "data_size": 65536 00:22:52.895 }, 00:22:52.895 { 00:22:52.895 "name": "BaseBdev2", 
00:22:52.895 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:52.895 "is_configured": true, 00:22:52.895 "data_offset": 0, 00:22:52.895 "data_size": 65536 00:22:52.896 } 00:22:52.896 ] 00:22:52.896 }' 00:22:52.896 18:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.896 18:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.896 18:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.154 [2024-07-15 18:37:38.572156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.154 [2024-07-15 18:37:38.576973] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28cb1d0 00:22:53.154 [2024-07-15 18:37:38.579032] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.154 18:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.088 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.347 18:37:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.347 "name": "raid_bdev1", 00:22:54.347 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:54.347 "strip_size_kb": 0, 00:22:54.347 "state": "online", 00:22:54.347 "raid_level": "raid1", 00:22:54.347 "superblock": false, 00:22:54.347 "num_base_bdevs": 2, 00:22:54.347 "num_base_bdevs_discovered": 2, 00:22:54.347 "num_base_bdevs_operational": 2, 00:22:54.347 "process": { 00:22:54.347 "type": "rebuild", 00:22:54.347 "target": "spare", 00:22:54.347 "progress": { 00:22:54.347 "blocks": 22528, 00:22:54.347 "percent": 34 00:22:54.347 } 00:22:54.347 }, 00:22:54.347 "base_bdevs_list": [ 00:22:54.347 { 00:22:54.347 "name": "spare", 00:22:54.347 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:22:54.347 "is_configured": true, 00:22:54.347 "data_offset": 0, 00:22:54.347 "data_size": 65536 00:22:54.347 }, 00:22:54.347 { 00:22:54.347 "name": "BaseBdev2", 00:22:54.347 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:54.347 "is_configured": true, 00:22:54.347 "data_offset": 0, 00:22:54.347 "data_size": 65536 00:22:54.347 } 00:22:54.347 ] 00:22:54.347 }' 00:22:54.347 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:54.347 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.347 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:54.347 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.347 18:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:54.605 [2024-07-15 18:37:40.110980] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.864 [2024-07-15 18:37:40.191100] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:22:54.864 [2024-07-15 18:37:40.191145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.864 [2024-07-15 18:37:40.191159] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.864 [2024-07-15 18:37:40.191166] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.864 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.123 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.123 "name": "raid_bdev1", 00:22:55.123 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:55.123 
"strip_size_kb": 0, 00:22:55.123 "state": "online", 00:22:55.123 "raid_level": "raid1", 00:22:55.123 "superblock": false, 00:22:55.123 "num_base_bdevs": 2, 00:22:55.123 "num_base_bdevs_discovered": 1, 00:22:55.123 "num_base_bdevs_operational": 1, 00:22:55.123 "base_bdevs_list": [ 00:22:55.123 { 00:22:55.123 "name": null, 00:22:55.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.123 "is_configured": false, 00:22:55.123 "data_offset": 0, 00:22:55.123 "data_size": 65536 00:22:55.123 }, 00:22:55.123 { 00:22:55.123 "name": "BaseBdev2", 00:22:55.123 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:55.123 "is_configured": true, 00:22:55.123 "data_offset": 0, 00:22:55.123 "data_size": 65536 00:22:55.123 } 00:22:55.123 ] 00:22:55.123 }' 00:22:55.123 18:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.123 18:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.690 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.948 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:55.948 "name": "raid_bdev1", 00:22:55.948 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 
00:22:55.948 "strip_size_kb": 0, 00:22:55.948 "state": "online", 00:22:55.948 "raid_level": "raid1", 00:22:55.948 "superblock": false, 00:22:55.948 "num_base_bdevs": 2, 00:22:55.948 "num_base_bdevs_discovered": 1, 00:22:55.948 "num_base_bdevs_operational": 1, 00:22:55.948 "base_bdevs_list": [ 00:22:55.948 { 00:22:55.948 "name": null, 00:22:55.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.948 "is_configured": false, 00:22:55.948 "data_offset": 0, 00:22:55.948 "data_size": 65536 00:22:55.948 }, 00:22:55.948 { 00:22:55.948 "name": "BaseBdev2", 00:22:55.948 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:55.948 "is_configured": true, 00:22:55.948 "data_offset": 0, 00:22:55.948 "data_size": 65536 00:22:55.948 } 00:22:55.948 ] 00:22:55.948 }' 00:22:55.948 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:55.948 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:55.948 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:55.948 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:55.948 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.206 [2024-07-15 18:37:41.691634] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.206 [2024-07-15 18:37:41.696399] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28cb1d0 00:22:56.206 [2024-07-15 18:37:41.697907] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.206 18:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.580 "name": "raid_bdev1", 00:22:57.580 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:57.580 "strip_size_kb": 0, 00:22:57.580 "state": "online", 00:22:57.580 "raid_level": "raid1", 00:22:57.580 "superblock": false, 00:22:57.580 "num_base_bdevs": 2, 00:22:57.580 "num_base_bdevs_discovered": 2, 00:22:57.580 "num_base_bdevs_operational": 2, 00:22:57.580 "process": { 00:22:57.580 "type": "rebuild", 00:22:57.580 "target": "spare", 00:22:57.580 "progress": { 00:22:57.580 "blocks": 24576, 00:22:57.580 "percent": 37 00:22:57.580 } 00:22:57.580 }, 00:22:57.580 "base_bdevs_list": [ 00:22:57.580 { 00:22:57.580 "name": "spare", 00:22:57.580 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:22:57.580 "is_configured": true, 00:22:57.580 "data_offset": 0, 00:22:57.580 "data_size": 65536 00:22:57.580 }, 00:22:57.580 { 00:22:57.580 "name": "BaseBdev2", 00:22:57.580 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:57.580 "is_configured": true, 00:22:57.580 "data_offset": 0, 00:22:57.580 "data_size": 65536 00:22:57.580 } 00:22:57.580 ] 00:22:57.580 }' 00:22:57.580 18:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # 
jq -r '.process.type // "none"' 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=822 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.580 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.839 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.839 "name": "raid_bdev1", 00:22:57.839 
"uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:57.839 "strip_size_kb": 0, 00:22:57.839 "state": "online", 00:22:57.839 "raid_level": "raid1", 00:22:57.839 "superblock": false, 00:22:57.839 "num_base_bdevs": 2, 00:22:57.839 "num_base_bdevs_discovered": 2, 00:22:57.839 "num_base_bdevs_operational": 2, 00:22:57.839 "process": { 00:22:57.839 "type": "rebuild", 00:22:57.839 "target": "spare", 00:22:57.839 "progress": { 00:22:57.839 "blocks": 30720, 00:22:57.839 "percent": 46 00:22:57.839 } 00:22:57.839 }, 00:22:57.839 "base_bdevs_list": [ 00:22:57.839 { 00:22:57.839 "name": "spare", 00:22:57.839 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:22:57.839 "is_configured": true, 00:22:57.839 "data_offset": 0, 00:22:57.839 "data_size": 65536 00:22:57.839 }, 00:22:57.839 { 00:22:57.839 "name": "BaseBdev2", 00:22:57.839 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:57.839 "is_configured": true, 00:22:57.839 "data_offset": 0, 00:22:57.839 "data_size": 65536 00:22:57.839 } 00:22:57.839 ] 00:22:57.839 }' 00:22:57.839 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:57.839 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.839 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.098 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.098 18:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:59.031 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.032 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.292 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.292 "name": "raid_bdev1", 00:22:59.292 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:22:59.292 "strip_size_kb": 0, 00:22:59.292 "state": "online", 00:22:59.292 "raid_level": "raid1", 00:22:59.292 "superblock": false, 00:22:59.292 "num_base_bdevs": 2, 00:22:59.292 "num_base_bdevs_discovered": 2, 00:22:59.292 "num_base_bdevs_operational": 2, 00:22:59.292 "process": { 00:22:59.292 "type": "rebuild", 00:22:59.292 "target": "spare", 00:22:59.292 "progress": { 00:22:59.292 "blocks": 59392, 00:22:59.292 "percent": 90 00:22:59.292 } 00:22:59.292 }, 00:22:59.292 "base_bdevs_list": [ 00:22:59.292 { 00:22:59.292 "name": "spare", 00:22:59.292 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:22:59.292 "is_configured": true, 00:22:59.292 "data_offset": 0, 00:22:59.292 "data_size": 65536 00:22:59.292 }, 00:22:59.292 { 00:22:59.292 "name": "BaseBdev2", 00:22:59.292 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:22:59.292 "is_configured": true, 00:22:59.292 "data_offset": 0, 00:22:59.292 "data_size": 65536 00:22:59.292 } 00:22:59.292 ] 00:22:59.292 }' 00:22:59.292 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:59.292 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.292 18:37:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:59.292 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.292 18:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:59.607 [2024-07-15 18:37:44.921736] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:59.607 [2024-07-15 18:37:44.921792] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:59.607 [2024-07-15 18:37:44.921829] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.540 18:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.540 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:00.540 "name": "raid_bdev1", 00:23:00.540 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:23:00.540 "strip_size_kb": 0, 00:23:00.540 "state": "online", 00:23:00.540 "raid_level": "raid1", 00:23:00.540 "superblock": false, 00:23:00.540 "num_base_bdevs": 2, 00:23:00.540 
"num_base_bdevs_discovered": 2, 00:23:00.540 "num_base_bdevs_operational": 2, 00:23:00.540 "base_bdevs_list": [ 00:23:00.540 { 00:23:00.540 "name": "spare", 00:23:00.540 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:23:00.540 "is_configured": true, 00:23:00.540 "data_offset": 0, 00:23:00.540 "data_size": 65536 00:23:00.540 }, 00:23:00.540 { 00:23:00.540 "name": "BaseBdev2", 00:23:00.540 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:23:00.540 "is_configured": true, 00:23:00.540 "data_offset": 0, 00:23:00.540 "data_size": 65536 00:23:00.540 } 00:23:00.540 ] 00:23:00.540 }' 00:23:00.540 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:00.540 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:00.540 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:00.797 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:00.798 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.798 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.055 18:37:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:01.055 "name": "raid_bdev1", 00:23:01.055 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:23:01.055 "strip_size_kb": 0, 00:23:01.055 "state": "online", 00:23:01.055 "raid_level": "raid1", 00:23:01.055 "superblock": false, 00:23:01.055 "num_base_bdevs": 2, 00:23:01.055 "num_base_bdevs_discovered": 2, 00:23:01.055 "num_base_bdevs_operational": 2, 00:23:01.055 "base_bdevs_list": [ 00:23:01.055 { 00:23:01.055 "name": "spare", 00:23:01.055 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:23:01.055 "is_configured": true, 00:23:01.055 "data_offset": 0, 00:23:01.055 "data_size": 65536 00:23:01.055 }, 00:23:01.055 { 00:23:01.055 "name": "BaseBdev2", 00:23:01.055 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:23:01.055 "is_configured": true, 00:23:01.055 "data_offset": 0, 00:23:01.055 "data_size": 65536 00:23:01.055 } 00:23:01.055 ] 00:23:01.055 }' 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.055 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.313 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.313 "name": "raid_bdev1", 00:23:01.313 "uuid": "5c11ab84-9234-44fa-bcf4-ac2fed23c884", 00:23:01.313 "strip_size_kb": 0, 00:23:01.313 "state": "online", 00:23:01.313 "raid_level": "raid1", 00:23:01.313 "superblock": false, 00:23:01.313 "num_base_bdevs": 2, 00:23:01.313 "num_base_bdevs_discovered": 2, 00:23:01.313 "num_base_bdevs_operational": 2, 00:23:01.313 "base_bdevs_list": [ 00:23:01.313 { 00:23:01.313 "name": "spare", 00:23:01.313 "uuid": "8477a72d-4192-5a47-9c59-c350ea82a939", 00:23:01.313 "is_configured": true, 00:23:01.313 "data_offset": 0, 00:23:01.313 "data_size": 65536 00:23:01.313 }, 00:23:01.313 { 00:23:01.313 "name": "BaseBdev2", 00:23:01.313 "uuid": "a0caea00-e050-5f57-bf8f-709523eebe72", 00:23:01.313 "is_configured": true, 00:23:01.313 "data_offset": 0, 00:23:01.313 "data_size": 65536 00:23:01.313 } 00:23:01.313 ] 00:23:01.313 }' 00:23:01.313 18:37:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.313 18:37:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.879 18:37:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:02.445 [2024-07-15 18:37:47.822072] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:02.445 [2024-07-15 18:37:47.822098] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.445 [2024-07-15 18:37:47.822152] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.445 [2024-07-15 18:37:47.822206] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.445 [2024-07-15 18:37:47.822215] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28cc370 name raid_bdev1, state offline 00:23:02.445 18:37:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.445 18:37:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
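The `nbd_start_disks` trace above walks two same-length arrays (`bdev_list` and `nbd_list`) with a single index, attaching each bdev to its NBD device. A minimal condensed sketch of that pairing pattern (the `rpc.py nbd_start_disk` call is shown only as a comment; array contents are taken from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the paired-array iteration used by nbd_start_disks in the trace:
# index i maps bdev_list[i] onto nbd_list[i].
bdev_list=('BaseBdev1' 'spare')
nbd_list=('/dev/nbd0' '/dev/nbd1')

for (( i = 0; i < ${#bdev_list[@]}; i++ )); do
    # The real helper runs, per pair:
    #   rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    echo "${bdev_list[i]} -> ${nbd_list[i]}"
done
# prints:
# BaseBdev1 -> /dev/nbd0
# spare -> /dev/nbd1
```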
00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:02.703 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:02.961 /dev/nbd0 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.961 1+0 records in 00:23:02.961 1+0 records out 00:23:02.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002227 s, 18.4 MB/s 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:02.961 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:03.219 /dev/nbd1 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 
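The `waitfornbd` sequence in the trace polls `/proc/partitions` for the new device (up to 20 attempts), then performs one direct 4 KiB `dd` read so readiness means "serves I/O", not merely "node exists". A minimal sketch with a hypothetical helper name; the `dd` verification is left as a comment since it needs a live NBD device:

```shell
#!/usr/bin/env bash
# Hypothetical condensed form of the waitfornbd readiness loop from the trace.
wait_ready() {
    local name=$1 tries=${2:-20} i
    for (( i = 1; i <= tries; i++ )); do
        # -w matches the device name as a whole word in /proc/partitions
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    (( i <= tries )) || return 1   # device never appeared
    # The real helper then proves the device is usable with roughly:
    #   dd if=/dev/$name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}

wait_ready nbdXYZ_demo 1 || echo "not present"
# prints: not present  (nbdXYZ_demo is a made-up device name)
```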
00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.219 1+0 records in 00:23:03.219 1+0 records out 00:23:03.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258224 s, 15.9 MB/s 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.219 18:37:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.475 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.731 18:37:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2888902 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2888902 ']' 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2888902 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888902 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888902' 00:23:03.731 killing process with pid 2888902 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2888902 00:23:03.731 Received shutdown signal, test time was about 60.000000 seconds 00:23:03.731 00:23:03.731 Latency(us) 00:23:03.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.731 =================================================================================================================== 00:23:03.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:03.731 [2024-07-15 18:37:49.248655] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:03.731 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2888902 00:23:03.731 [2024-07-15 18:37:49.274216] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@784 -- # return 0 00:23:03.989 00:23:03.989 real 0m23.597s 00:23:03.989 user 0m33.559s 00:23:03.989 sys 0m4.094s 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.989 ************************************ 00:23:03.989 END TEST raid_rebuild_test 00:23:03.989 ************************************ 00:23:03.989 18:37:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:03.989 18:37:49 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:23:03.989 18:37:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:03.989 18:37:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:03.989 18:37:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:03.989 ************************************ 00:23:03.989 START TEST raid_rebuild_test_sb 00:23:03.989 ************************************ 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2892751 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2892751 /var/tmp/spdk-raid.sock 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2892751 ']' 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:03.989 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.990 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:03.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:03.990 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.990 18:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.248 [2024-07-15 18:37:49.575926] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:23:04.248 [2024-07-15 18:37:49.575992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892751 ] 00:23:04.248 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:04.248 Zero copy mechanism will not be used. 
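The `waitforlisten 2892751 /var/tmp/spdk-raid.sock` step above blocks until the freshly launched bdevperf process is accepting RPCs on its UNIX-domain socket. A simplified sketch of that idea, with a hypothetical function name (the real SPDK helper additionally checks the pid is still alive and retries an actual RPC before declaring the target ready):

```shell
#!/usr/bin/env bash
# Hypothetical reduction of waitforlisten: poll for the RPC socket to appear.
waitforlisten_sketch() {
    local sock=$1 tries=${2:-100} i
    for (( i = 1; i <= tries; i++ )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

waitforlisten_sketch /var/tmp/nonexistent-demo.sock 1 || echo "timed out"
# prints: timed out  (no daemon is listening on that demo path)
```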
00:23:04.248 [2024-07-15 18:37:49.674129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.248 [2024-07-15 18:37:49.768868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.506 [2024-07-15 18:37:49.833272] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.506 [2024-07-15 18:37:49.833306] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:05.440 18:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.440 18:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:05.440 18:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:05.440 18:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:05.698 BaseBdev1_malloc 00:23:05.956 18:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:06.214 [2024-07-15 18:37:51.737403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:06.214 [2024-07-15 18:37:51.737449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.214 [2024-07-15 18:37:51.737474] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26be130 00:23:06.214 [2024-07-15 18:37:51.737484] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.214 [2024-07-15 18:37:51.739199] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.214 [2024-07-15 18:37:51.739226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:06.214 BaseBdev1 
00:23:06.471 18:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:06.471 18:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:06.730 BaseBdev2_malloc 00:23:06.730 18:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:07.297 [2024-07-15 18:37:52.736850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:07.297 [2024-07-15 18:37:52.736895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.297 [2024-07-15 18:37:52.736917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2863fa0 00:23:07.297 [2024-07-15 18:37:52.736927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.297 [2024-07-15 18:37:52.738515] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.297 [2024-07-15 18:37:52.738541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:07.297 BaseBdev2 00:23:07.297 18:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:07.865 spare_malloc 00:23:07.865 18:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:08.434 spare_delay 00:23:08.434 18:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:08.713 [2024-07-15 18:37:54.229342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:08.713 [2024-07-15 18:37:54.229385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.713 [2024-07-15 18:37:54.229407] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2865f40 00:23:08.713 [2024-07-15 18:37:54.229417] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.713 [2024-07-15 18:37:54.231054] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.713 [2024-07-15 18:37:54.231081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:08.713 spare 00:23:08.713 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:09.281 [2024-07-15 18:37:54.722674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.281 [2024-07-15 18:37:54.724044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:09.281 [2024-07-15 18:37:54.724207] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2867370 00:23:09.281 [2024-07-15 18:37:54.724219] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:09.281 [2024-07-15 18:37:54.724420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28661d0 00:23:09.281 [2024-07-15 18:37:54.724566] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2867370 00:23:09.281 [2024-07-15 18:37:54.724574] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x2867370 00:23:09.281 [2024-07-15 18:37:54.724674] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.281 18:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.849 18:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.849 "name": "raid_bdev1", 00:23:09.849 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:09.849 "strip_size_kb": 0, 00:23:09.849 "state": "online", 00:23:09.849 "raid_level": "raid1", 00:23:09.849 "superblock": true, 00:23:09.849 "num_base_bdevs": 2, 00:23:09.849 "num_base_bdevs_discovered": 2, 00:23:09.849 
"num_base_bdevs_operational": 2, 00:23:09.849 "base_bdevs_list": [ 00:23:09.849 { 00:23:09.849 "name": "BaseBdev1", 00:23:09.849 "uuid": "734b754b-db0d-5757-b1e7-c92472a84b97", 00:23:09.849 "is_configured": true, 00:23:09.849 "data_offset": 2048, 00:23:09.849 "data_size": 63488 00:23:09.849 }, 00:23:09.849 { 00:23:09.849 "name": "BaseBdev2", 00:23:09.849 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:09.849 "is_configured": true, 00:23:09.849 "data_offset": 2048, 00:23:09.849 "data_size": 63488 00:23:09.849 } 00:23:09.849 ] 00:23:09.849 }' 00:23:09.849 18:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.849 18:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.418 18:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:10.418 18:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:10.676 [2024-07-15 18:37:56.098613] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.676 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:23:10.676 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.676 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local 
write_unit_size 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:10.935 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:11.194 [2024-07-15 18:37:56.620050] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28661d0 00:23:11.194 /dev/nbd0 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.194 1+0 records in 00:23:11.194 1+0 records out 00:23:11.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218011 s, 18.8 MB/s 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:11.194 18:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:16.467 63488+0 records in 00:23:16.467 63488+0 records out 00:23:16.467 32505856 bytes (33 MB, 
31 MiB) copied, 4.99553 s, 6.5 MB/s 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:16.467 [2024-07-15 18:38:01.945620] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:16.467 18:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev1 00:23:16.726 [2024-07-15 18:38:02.190329] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.726 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.028 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.028 "name": "raid_bdev1", 00:23:17.028 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:17.028 "strip_size_kb": 0, 00:23:17.028 "state": "online", 00:23:17.028 "raid_level": "raid1", 00:23:17.028 "superblock": true, 00:23:17.028 "num_base_bdevs": 2, 00:23:17.028 "num_base_bdevs_discovered": 1, 00:23:17.028 
"num_base_bdevs_operational": 1, 00:23:17.028 "base_bdevs_list": [ 00:23:17.028 { 00:23:17.028 "name": null, 00:23:17.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.028 "is_configured": false, 00:23:17.028 "data_offset": 2048, 00:23:17.028 "data_size": 63488 00:23:17.028 }, 00:23:17.028 { 00:23:17.028 "name": "BaseBdev2", 00:23:17.028 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:17.028 "is_configured": true, 00:23:17.028 "data_offset": 2048, 00:23:17.028 "data_size": 63488 00:23:17.028 } 00:23:17.028 ] 00:23:17.028 }' 00:23:17.028 18:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.028 18:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.643 18:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.901 [2024-07-15 18:38:03.349456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.901 [2024-07-15 18:38:03.354231] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x286fcf0 00:23:17.901 [2024-07-15 18:38:03.356279] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.901 18:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.838 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.096 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.096 "name": "raid_bdev1", 00:23:19.096 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:19.096 "strip_size_kb": 0, 00:23:19.096 "state": "online", 00:23:19.096 "raid_level": "raid1", 00:23:19.096 "superblock": true, 00:23:19.096 "num_base_bdevs": 2, 00:23:19.096 "num_base_bdevs_discovered": 2, 00:23:19.096 "num_base_bdevs_operational": 2, 00:23:19.096 "process": { 00:23:19.096 "type": "rebuild", 00:23:19.096 "target": "spare", 00:23:19.096 "progress": { 00:23:19.096 "blocks": 24576, 00:23:19.096 "percent": 38 00:23:19.096 } 00:23:19.096 }, 00:23:19.096 "base_bdevs_list": [ 00:23:19.096 { 00:23:19.096 "name": "spare", 00:23:19.096 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:19.096 "is_configured": true, 00:23:19.096 "data_offset": 2048, 00:23:19.096 "data_size": 63488 00:23:19.096 }, 00:23:19.096 { 00:23:19.096 "name": "BaseBdev2", 00:23:19.096 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:19.096 "is_configured": true, 00:23:19.096 "data_offset": 2048, 00:23:19.096 "data_size": 63488 00:23:19.096 } 00:23:19.096 ] 00:23:19.096 }' 00:23:19.096 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.354 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.354 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.354 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.354 18:38:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:19.612 [2024-07-15 18:38:04.967460] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.612 [2024-07-15 18:38:04.968325] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:19.612 [2024-07-15 18:38:04.968366] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.612 [2024-07-15 18:38:04.968380] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.612 [2024-07-15 18:38:04.968387] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.612 18:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.869 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.869 "name": "raid_bdev1", 00:23:19.869 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:19.869 "strip_size_kb": 0, 00:23:19.869 "state": "online", 00:23:19.869 "raid_level": "raid1", 00:23:19.869 "superblock": true, 00:23:19.869 "num_base_bdevs": 2, 00:23:19.869 "num_base_bdevs_discovered": 1, 00:23:19.869 "num_base_bdevs_operational": 1, 00:23:19.869 "base_bdevs_list": [ 00:23:19.869 { 00:23:19.869 "name": null, 00:23:19.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.869 "is_configured": false, 00:23:19.869 "data_offset": 2048, 00:23:19.869 "data_size": 63488 00:23:19.869 }, 00:23:19.869 { 00:23:19.869 "name": "BaseBdev2", 00:23:19.869 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:19.869 "is_configured": true, 00:23:19.869 "data_offset": 2048, 00:23:19.869 "data_size": 63488 00:23:19.869 } 00:23:19.869 ] 00:23:19.869 }' 00:23:19.869 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.869 18:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.435 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:20.435 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:20.435 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:20.435 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:20.435 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:20.435 18:38:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.435 18:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.694 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:20.694 "name": "raid_bdev1", 00:23:20.694 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:20.694 "strip_size_kb": 0, 00:23:20.694 "state": "online", 00:23:20.694 "raid_level": "raid1", 00:23:20.694 "superblock": true, 00:23:20.694 "num_base_bdevs": 2, 00:23:20.694 "num_base_bdevs_discovered": 1, 00:23:20.694 "num_base_bdevs_operational": 1, 00:23:20.694 "base_bdevs_list": [ 00:23:20.694 { 00:23:20.694 "name": null, 00:23:20.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.694 "is_configured": false, 00:23:20.694 "data_offset": 2048, 00:23:20.694 "data_size": 63488 00:23:20.694 }, 00:23:20.694 { 00:23:20.694 "name": "BaseBdev2", 00:23:20.694 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:20.694 "is_configured": true, 00:23:20.694 "data_offset": 2048, 00:23:20.694 "data_size": 63488 00:23:20.694 } 00:23:20.694 ] 00:23:20.694 }' 00:23:20.694 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:20.694 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:20.694 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:20.694 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:20.694 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.951 [2024-07-15 18:38:06.444739] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.951 [2024-07-15 18:38:06.449559] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x286fcf0 00:23:20.951 [2024-07-15 18:38:06.451066] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.951 18:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:22.325 "name": "raid_bdev1", 00:23:22.325 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:22.325 "strip_size_kb": 0, 00:23:22.325 "state": "online", 00:23:22.325 "raid_level": "raid1", 00:23:22.325 "superblock": true, 00:23:22.325 "num_base_bdevs": 2, 00:23:22.325 "num_base_bdevs_discovered": 2, 00:23:22.325 "num_base_bdevs_operational": 2, 00:23:22.325 "process": { 00:23:22.325 "type": "rebuild", 00:23:22.325 "target": "spare", 00:23:22.325 "progress": { 00:23:22.325 "blocks": 24576, 00:23:22.325 "percent": 38 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 
"base_bdevs_list": [ 00:23:22.325 { 00:23:22.325 "name": "spare", 00:23:22.325 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:22.325 "is_configured": true, 00:23:22.325 "data_offset": 2048, 00:23:22.325 "data_size": 63488 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "name": "BaseBdev2", 00:23:22.325 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:22.325 "is_configured": true, 00:23:22.325 "data_offset": 2048, 00:23:22.325 "data_size": 63488 00:23:22.325 } 00:23:22.325 ] 00:23:22.325 }' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:22.325 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=846 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.325 18:38:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.325 18:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.584 18:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:22.584 "name": "raid_bdev1", 00:23:22.584 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:22.584 "strip_size_kb": 0, 00:23:22.584 "state": "online", 00:23:22.584 "raid_level": "raid1", 00:23:22.584 "superblock": true, 00:23:22.584 "num_base_bdevs": 2, 00:23:22.584 "num_base_bdevs_discovered": 2, 00:23:22.584 "num_base_bdevs_operational": 2, 00:23:22.584 "process": { 00:23:22.584 "type": "rebuild", 00:23:22.584 "target": "spare", 00:23:22.584 "progress": { 00:23:22.584 "blocks": 32768, 00:23:22.584 "percent": 51 00:23:22.584 } 00:23:22.584 }, 00:23:22.584 "base_bdevs_list": [ 00:23:22.584 { 00:23:22.584 "name": "spare", 00:23:22.584 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:22.584 "is_configured": true, 00:23:22.584 "data_offset": 2048, 00:23:22.584 "data_size": 63488 00:23:22.584 }, 00:23:22.584 { 00:23:22.584 "name": "BaseBdev2", 00:23:22.584 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:22.584 "is_configured": true, 00:23:22.584 "data_offset": 2048, 00:23:22.584 "data_size": 63488 00:23:22.584 } 00:23:22.584 ] 00:23:22.584 }' 00:23:22.584 18:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:23:22.584 18:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.584 18:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:22.844 18:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.844 18:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.781 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.040 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:24.040 "name": "raid_bdev1", 00:23:24.040 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:24.040 "strip_size_kb": 0, 00:23:24.040 "state": "online", 00:23:24.040 "raid_level": "raid1", 00:23:24.040 "superblock": true, 00:23:24.040 "num_base_bdevs": 2, 00:23:24.040 "num_base_bdevs_discovered": 2, 00:23:24.040 "num_base_bdevs_operational": 2, 00:23:24.040 "process": { 00:23:24.040 "type": "rebuild", 00:23:24.040 "target": "spare", 
00:23:24.040 "progress": { 00:23:24.040 "blocks": 59392, 00:23:24.040 "percent": 93 00:23:24.040 } 00:23:24.040 }, 00:23:24.040 "base_bdevs_list": [ 00:23:24.040 { 00:23:24.040 "name": "spare", 00:23:24.040 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:24.040 "is_configured": true, 00:23:24.040 "data_offset": 2048, 00:23:24.040 "data_size": 63488 00:23:24.040 }, 00:23:24.040 { 00:23:24.040 "name": "BaseBdev2", 00:23:24.040 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:24.040 "is_configured": true, 00:23:24.040 "data_offset": 2048, 00:23:24.040 "data_size": 63488 00:23:24.040 } 00:23:24.040 ] 00:23:24.040 }' 00:23:24.040 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:24.040 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:24.040 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:24.040 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:24.040 18:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:24.040 [2024-07-15 18:38:09.574268] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:24.040 [2024-07-15 18:38:09.574324] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:24.040 [2024-07-15 18:38:09.574404] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # 
local process_type=rebuild 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.417 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:25.417 "name": "raid_bdev1", 00:23:25.417 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:25.417 "strip_size_kb": 0, 00:23:25.417 "state": "online", 00:23:25.417 "raid_level": "raid1", 00:23:25.417 "superblock": true, 00:23:25.417 "num_base_bdevs": 2, 00:23:25.417 "num_base_bdevs_discovered": 2, 00:23:25.417 "num_base_bdevs_operational": 2, 00:23:25.417 "base_bdevs_list": [ 00:23:25.417 { 00:23:25.417 "name": "spare", 00:23:25.417 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:25.417 "is_configured": true, 00:23:25.417 "data_offset": 2048, 00:23:25.417 "data_size": 63488 00:23:25.417 }, 00:23:25.417 { 00:23:25.417 "name": "BaseBdev2", 00:23:25.417 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:25.417 "is_configured": true, 00:23:25.418 "data_offset": 2048, 00:23:25.418 "data_size": 63488 00:23:25.418 } 00:23:25.418 ] 00:23:25.418 }' 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:25.418 
18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.418 18:38:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.677 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:25.677 "name": "raid_bdev1", 00:23:25.677 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:25.677 "strip_size_kb": 0, 00:23:25.677 "state": "online", 00:23:25.677 "raid_level": "raid1", 00:23:25.677 "superblock": true, 00:23:25.677 "num_base_bdevs": 2, 00:23:25.677 "num_base_bdevs_discovered": 2, 00:23:25.677 "num_base_bdevs_operational": 2, 00:23:25.677 "base_bdevs_list": [ 00:23:25.677 { 00:23:25.677 "name": "spare", 00:23:25.677 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:25.677 "is_configured": true, 00:23:25.677 "data_offset": 2048, 00:23:25.677 "data_size": 63488 00:23:25.677 }, 00:23:25.677 { 00:23:25.677 "name": "BaseBdev2", 00:23:25.677 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:25.677 "is_configured": true, 00:23:25.677 "data_offset": 2048, 00:23:25.677 "data_size": 63488 00:23:25.677 } 00:23:25.677 ] 00:23:25.677 }' 00:23:25.677 18:38:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:25.677 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:25.677 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.936 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.195 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.195 "name": "raid_bdev1", 00:23:26.195 "uuid": 
"d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:26.195 "strip_size_kb": 0, 00:23:26.195 "state": "online", 00:23:26.195 "raid_level": "raid1", 00:23:26.195 "superblock": true, 00:23:26.195 "num_base_bdevs": 2, 00:23:26.195 "num_base_bdevs_discovered": 2, 00:23:26.195 "num_base_bdevs_operational": 2, 00:23:26.195 "base_bdevs_list": [ 00:23:26.195 { 00:23:26.195 "name": "spare", 00:23:26.195 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:26.195 "is_configured": true, 00:23:26.195 "data_offset": 2048, 00:23:26.195 "data_size": 63488 00:23:26.195 }, 00:23:26.195 { 00:23:26.195 "name": "BaseBdev2", 00:23:26.195 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:26.195 "is_configured": true, 00:23:26.195 "data_offset": 2048, 00:23:26.195 "data_size": 63488 00:23:26.195 } 00:23:26.195 ] 00:23:26.195 }' 00:23:26.195 18:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.195 18:38:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:27.022 [2024-07-15 18:38:12.362361] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.022 [2024-07-15 18:38:12.362387] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.022 [2024-07-15 18:38:12.362440] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.022 [2024-07-15 18:38:12.362494] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:27.022 [2024-07-15 18:38:12.362504] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2867370 name raid_bdev1, state offline 00:23:27.022 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.022 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.281 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:27.540 /dev/nbd0 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.540 1+0 records in 00:23:27.540 1+0 records out 00:23:27.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219078 s, 18.7 MB/s 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.540 18:38:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.540 18:38:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:27.798 /dev/nbd1 00:23:27.798 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:27.798 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:27.798 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.799 1+0 records in 00:23:27.799 1+0 records out 00:23:27.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024375 s, 16.8 MB/s 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.799 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:28.056 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:28.057 18:38:13 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:28.057 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:28.315 18:38:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:28.574 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:28.833 [2024-07-15 18:38:14.329647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on spare_delay 00:23:28.833 [2024-07-15 18:38:14.329686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.833 [2024-07-15 18:38:14.329707] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26b72b0 00:23:28.833 [2024-07-15 18:38:14.329717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.833 [2024-07-15 18:38:14.331397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.833 [2024-07-15 18:38:14.331424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:28.833 [2024-07-15 18:38:14.331494] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:28.833 [2024-07-15 18:38:14.331517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:28.833 [2024-07-15 18:38:14.331617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.833 spare 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.833 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.092 [2024-07-15 18:38:14.431930] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26b62b0 00:23:29.092 [2024-07-15 18:38:14.431945] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:29.092 [2024-07-15 18:38:14.432139] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2861ef0 00:23:29.092 [2024-07-15 18:38:14.432284] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26b62b0 00:23:29.092 [2024-07-15 18:38:14.432292] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26b62b0 00:23:29.092 [2024-07-15 18:38:14.432395] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.092 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.092 "name": "raid_bdev1", 00:23:29.092 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:29.092 "strip_size_kb": 0, 00:23:29.092 "state": "online", 00:23:29.092 "raid_level": "raid1", 00:23:29.092 "superblock": true, 00:23:29.092 "num_base_bdevs": 2, 00:23:29.092 "num_base_bdevs_discovered": 2, 00:23:29.092 "num_base_bdevs_operational": 2, 00:23:29.092 "base_bdevs_list": [ 00:23:29.092 { 00:23:29.092 "name": "spare", 00:23:29.092 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:29.092 "is_configured": true, 00:23:29.092 "data_offset": 2048, 00:23:29.092 "data_size": 63488 00:23:29.092 }, 00:23:29.092 { 00:23:29.092 "name": "BaseBdev2", 00:23:29.092 "uuid": 
"a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:29.092 "is_configured": true, 00:23:29.092 "data_offset": 2048, 00:23:29.092 "data_size": 63488 00:23:29.092 } 00:23:29.092 ] 00:23:29.092 }' 00:23:29.092 18:38:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.092 18:38:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.030 "name": "raid_bdev1", 00:23:30.030 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:30.030 "strip_size_kb": 0, 00:23:30.030 "state": "online", 00:23:30.030 "raid_level": "raid1", 00:23:30.030 "superblock": true, 00:23:30.030 "num_base_bdevs": 2, 00:23:30.030 "num_base_bdevs_discovered": 2, 00:23:30.030 "num_base_bdevs_operational": 2, 00:23:30.030 "base_bdevs_list": [ 00:23:30.030 { 00:23:30.030 "name": "spare", 00:23:30.030 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:30.030 "is_configured": true, 00:23:30.030 "data_offset": 2048, 00:23:30.030 "data_size": 63488 00:23:30.030 }, 00:23:30.030 { 
00:23:30.030 "name": "BaseBdev2", 00:23:30.030 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:30.030 "is_configured": true, 00:23:30.030 "data_offset": 2048, 00:23:30.030 "data_size": 63488 00:23:30.030 } 00:23:30.030 ] 00:23:30.030 }' 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:30.030 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:30.289 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.289 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.289 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:30.548 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.548 18:38:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:30.807 [2024-07-15 18:38:16.102531] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.807 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.807 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.807 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.807 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.807 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:23:30.807 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:30.808 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.808 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.808 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.808 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.808 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.808 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.067 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.067 "name": "raid_bdev1", 00:23:31.067 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:31.067 "strip_size_kb": 0, 00:23:31.067 "state": "online", 00:23:31.067 "raid_level": "raid1", 00:23:31.067 "superblock": true, 00:23:31.067 "num_base_bdevs": 2, 00:23:31.067 "num_base_bdevs_discovered": 1, 00:23:31.067 "num_base_bdevs_operational": 1, 00:23:31.067 "base_bdevs_list": [ 00:23:31.067 { 00:23:31.067 "name": null, 00:23:31.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.067 "is_configured": false, 00:23:31.067 "data_offset": 2048, 00:23:31.067 "data_size": 63488 00:23:31.067 }, 00:23:31.067 { 00:23:31.067 "name": "BaseBdev2", 00:23:31.067 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:31.067 "is_configured": true, 00:23:31.067 "data_offset": 2048, 00:23:31.067 "data_size": 63488 00:23:31.067 } 00:23:31.067 ] 00:23:31.067 }' 00:23:31.067 18:38:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.067 18:38:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.635 18:38:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.909 [2024-07-15 18:38:17.241619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.909 [2024-07-15 18:38:17.241763] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:31.909 [2024-07-15 18:38:17.241777] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:31.909 [2024-07-15 18:38:17.241800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.909 [2024-07-15 18:38:17.246449] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2861ef0 00:23:31.909 [2024-07-15 18:38:17.247856] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.909 18:38:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.882 18:38:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.140 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.140 "name": "raid_bdev1", 00:23:33.140 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:33.140 "strip_size_kb": 0, 00:23:33.140 "state": "online", 00:23:33.140 "raid_level": "raid1", 00:23:33.140 "superblock": true, 00:23:33.140 "num_base_bdevs": 2, 00:23:33.140 "num_base_bdevs_discovered": 2, 00:23:33.140 "num_base_bdevs_operational": 2, 00:23:33.140 "process": { 00:23:33.140 "type": "rebuild", 00:23:33.140 "target": "spare", 00:23:33.140 "progress": { 00:23:33.140 "blocks": 24576, 00:23:33.140 "percent": 38 00:23:33.140 } 00:23:33.140 }, 00:23:33.140 "base_bdevs_list": [ 00:23:33.140 { 00:23:33.140 "name": "spare", 00:23:33.140 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:33.140 "is_configured": true, 00:23:33.140 "data_offset": 2048, 00:23:33.140 "data_size": 63488 00:23:33.140 }, 00:23:33.140 { 00:23:33.140 "name": "BaseBdev2", 00:23:33.140 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:33.140 "is_configured": true, 00:23:33.140 "data_offset": 2048, 00:23:33.140 "data_size": 63488 00:23:33.140 } 00:23:33.140 ] 00:23:33.140 }' 00:23:33.140 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:33.140 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.140 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:33.140 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.140 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:33.399 [2024-07-15 18:38:18.851975] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:33.399 [2024-07-15 18:38:18.859930] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.399 [2024-07-15 18:38:18.859975] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.399 [2024-07-15 18:38:18.859996] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.399 [2024-07-15 18:38:18.860002] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.399 18:38:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.658 18:38:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.658 "name": "raid_bdev1", 00:23:33.658 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:33.658 "strip_size_kb": 0, 00:23:33.658 "state": "online", 00:23:33.658 "raid_level": "raid1", 00:23:33.658 "superblock": true, 00:23:33.658 "num_base_bdevs": 2, 00:23:33.658 "num_base_bdevs_discovered": 1, 00:23:33.658 "num_base_bdevs_operational": 1, 00:23:33.658 "base_bdevs_list": [ 00:23:33.658 { 00:23:33.658 "name": null, 00:23:33.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.658 "is_configured": false, 00:23:33.658 "data_offset": 2048, 00:23:33.658 "data_size": 63488 00:23:33.658 }, 00:23:33.658 { 00:23:33.658 "name": "BaseBdev2", 00:23:33.658 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:33.658 "is_configured": true, 00:23:33.658 "data_offset": 2048, 00:23:33.658 "data_size": 63488 00:23:33.658 } 00:23:33.658 ] 00:23:33.658 }' 00:23:33.658 18:38:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.658 18:38:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.593 18:38:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:34.593 [2024-07-15 18:38:20.015444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:34.593 [2024-07-15 18:38:20.015494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.593 [2024-07-15 18:38:20.015516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2866f30 00:23:34.593 [2024-07-15 18:38:20.015526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.593 [2024-07-15 18:38:20.015888] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.593 [2024-07-15 
18:38:20.015903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:34.593 [2024-07-15 18:38:20.015987] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:34.593 [2024-07-15 18:38:20.015998] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:34.593 [2024-07-15 18:38:20.016005] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:34.593 [2024-07-15 18:38:20.016021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.593 [2024-07-15 18:38:20.020732] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28661d0 00:23:34.593 spare 00:23:34.593 [2024-07-15 18:38:20.022152] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:34.593 18:38:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.528 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.786 18:38:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.786 "name": "raid_bdev1", 00:23:35.786 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:35.786 "strip_size_kb": 0, 00:23:35.786 "state": "online", 00:23:35.786 "raid_level": "raid1", 00:23:35.786 "superblock": true, 00:23:35.786 "num_base_bdevs": 2, 00:23:35.786 "num_base_bdevs_discovered": 2, 00:23:35.786 "num_base_bdevs_operational": 2, 00:23:35.786 "process": { 00:23:35.786 "type": "rebuild", 00:23:35.786 "target": "spare", 00:23:35.786 "progress": { 00:23:35.786 "blocks": 22528, 00:23:35.786 "percent": 35 00:23:35.786 } 00:23:35.786 }, 00:23:35.786 "base_bdevs_list": [ 00:23:35.786 { 00:23:35.786 "name": "spare", 00:23:35.786 "uuid": "7caf708d-b13f-56a2-a393-13c0fa47f492", 00:23:35.786 "is_configured": true, 00:23:35.786 "data_offset": 2048, 00:23:35.786 "data_size": 63488 00:23:35.786 }, 00:23:35.786 { 00:23:35.786 "name": "BaseBdev2", 00:23:35.786 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:35.786 "is_configured": true, 00:23:35.786 "data_offset": 2048, 00:23:35.786 "data_size": 63488 00:23:35.786 } 00:23:35.786 ] 00:23:35.786 }' 00:23:35.786 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.786 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.786 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.786 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.786 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:36.044 [2024-07-15 18:38:21.562000] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.302 [2024-07-15 18:38:21.634143] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild 
on raid bdev raid_bdev1: No such device 00:23:36.302 [2024-07-15 18:38:21.634183] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.302 [2024-07-15 18:38:21.634197] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.302 [2024-07-15 18:38:21.634203] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.302 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.561 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.561 "name": "raid_bdev1", 00:23:36.561 "uuid": 
"d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:36.561 "strip_size_kb": 0, 00:23:36.561 "state": "online", 00:23:36.561 "raid_level": "raid1", 00:23:36.561 "superblock": true, 00:23:36.561 "num_base_bdevs": 2, 00:23:36.561 "num_base_bdevs_discovered": 1, 00:23:36.561 "num_base_bdevs_operational": 1, 00:23:36.561 "base_bdevs_list": [ 00:23:36.561 { 00:23:36.561 "name": null, 00:23:36.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.561 "is_configured": false, 00:23:36.561 "data_offset": 2048, 00:23:36.561 "data_size": 63488 00:23:36.561 }, 00:23:36.561 { 00:23:36.561 "name": "BaseBdev2", 00:23:36.561 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:36.561 "is_configured": true, 00:23:36.561 "data_offset": 2048, 00:23:36.561 "data_size": 63488 00:23:36.561 } 00:23:36.561 ] 00:23:36.561 }' 00:23:36.561 18:38:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.561 18:38:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.127 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.385 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:23:37.385 "name": "raid_bdev1", 00:23:37.385 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:37.385 "strip_size_kb": 0, 00:23:37.385 "state": "online", 00:23:37.385 "raid_level": "raid1", 00:23:37.385 "superblock": true, 00:23:37.385 "num_base_bdevs": 2, 00:23:37.385 "num_base_bdevs_discovered": 1, 00:23:37.385 "num_base_bdevs_operational": 1, 00:23:37.385 "base_bdevs_list": [ 00:23:37.385 { 00:23:37.385 "name": null, 00:23:37.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.385 "is_configured": false, 00:23:37.385 "data_offset": 2048, 00:23:37.385 "data_size": 63488 00:23:37.385 }, 00:23:37.385 { 00:23:37.385 "name": "BaseBdev2", 00:23:37.385 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:37.385 "is_configured": true, 00:23:37.385 "data_offset": 2048, 00:23:37.385 "data_size": 63488 00:23:37.385 } 00:23:37.385 ] 00:23:37.385 }' 00:23:37.385 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:37.385 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:37.385 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:37.386 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:37.386 18:38:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:37.644 18:38:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:37.906 [2024-07-15 18:38:23.302964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:37.906 [2024-07-15 18:38:23.303008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.906 
[2024-07-15 18:38:23.303029] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26be360 00:23:37.906 [2024-07-15 18:38:23.303039] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.906 [2024-07-15 18:38:23.303371] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.906 [2024-07-15 18:38:23.303387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:37.906 [2024-07-15 18:38:23.303444] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:37.906 [2024-07-15 18:38:23.303454] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:37.906 [2024-07-15 18:38:23.303468] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:37.906 BaseBdev1 00:23:37.906 18:38:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.846 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.105 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.105 "name": "raid_bdev1", 00:23:39.105 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:39.105 "strip_size_kb": 0, 00:23:39.105 "state": "online", 00:23:39.105 "raid_level": "raid1", 00:23:39.105 "superblock": true, 00:23:39.105 "num_base_bdevs": 2, 00:23:39.105 "num_base_bdevs_discovered": 1, 00:23:39.105 "num_base_bdevs_operational": 1, 00:23:39.105 "base_bdevs_list": [ 00:23:39.105 { 00:23:39.105 "name": null, 00:23:39.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.105 "is_configured": false, 00:23:39.105 "data_offset": 2048, 00:23:39.105 "data_size": 63488 00:23:39.105 }, 00:23:39.105 { 00:23:39.105 "name": "BaseBdev2", 00:23:39.105 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:39.105 "is_configured": true, 00:23:39.105 "data_offset": 2048, 00:23:39.105 "data_size": 63488 00:23:39.105 } 00:23:39.105 ] 00:23:39.105 }' 00:23:39.105 18:38:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.105 18:38:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 
00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.673 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.238 "name": "raid_bdev1", 00:23:40.238 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:40.238 "strip_size_kb": 0, 00:23:40.238 "state": "online", 00:23:40.238 "raid_level": "raid1", 00:23:40.238 "superblock": true, 00:23:40.238 "num_base_bdevs": 2, 00:23:40.238 "num_base_bdevs_discovered": 1, 00:23:40.238 "num_base_bdevs_operational": 1, 00:23:40.238 "base_bdevs_list": [ 00:23:40.238 { 00:23:40.238 "name": null, 00:23:40.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.238 "is_configured": false, 00:23:40.238 "data_offset": 2048, 00:23:40.238 "data_size": 63488 00:23:40.238 }, 00:23:40.238 { 00:23:40.238 "name": "BaseBdev2", 00:23:40.238 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:40.238 "is_configured": true, 00:23:40.238 "data_offset": 2048, 00:23:40.238 "data_size": 63488 00:23:40.238 } 00:23:40.238 ] 00:23:40.238 }' 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:40.238 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:40.496 [2024-07-15 18:38:25.817926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:23:40.496 [2024-07-15 18:38:25.818047] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:40.496 [2024-07-15 18:38:25.818060] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:40.496 request: 00:23:40.496 { 00:23:40.496 "base_bdev": "BaseBdev1", 00:23:40.496 "raid_bdev": "raid_bdev1", 00:23:40.496 "method": "bdev_raid_add_base_bdev", 00:23:40.496 "req_id": 1 00:23:40.496 } 00:23:40.496 Got JSON-RPC error response 00:23:40.496 response: 00:23:40.496 { 00:23:40.496 "code": -22, 00:23:40.496 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:40.496 } 00:23:40.496 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:23:40.496 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:40.496 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:40.496 18:38:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:40.496 18:38:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.431 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.432 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.432 18:38:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.690 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.690 "name": "raid_bdev1", 00:23:41.690 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:41.690 "strip_size_kb": 0, 00:23:41.690 "state": "online", 00:23:41.690 "raid_level": "raid1", 00:23:41.690 "superblock": true, 00:23:41.690 "num_base_bdevs": 2, 00:23:41.690 "num_base_bdevs_discovered": 1, 00:23:41.690 "num_base_bdevs_operational": 1, 00:23:41.690 "base_bdevs_list": [ 00:23:41.690 { 00:23:41.690 "name": null, 00:23:41.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.690 "is_configured": false, 00:23:41.690 "data_offset": 2048, 00:23:41.690 "data_size": 63488 00:23:41.690 }, 00:23:41.690 { 00:23:41.690 "name": "BaseBdev2", 00:23:41.690 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:41.690 "is_configured": true, 00:23:41.690 "data_offset": 2048, 00:23:41.690 "data_size": 63488 00:23:41.690 } 00:23:41.690 ] 00:23:41.690 }' 00:23:41.690 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.690 18:38:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.257 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.257 
18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.257 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:42.257 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:42.257 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.257 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.257 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.516 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.516 "name": "raid_bdev1", 00:23:42.516 "uuid": "d0a99b23-e25d-42ce-be13-a3c112cdcfcb", 00:23:42.516 "strip_size_kb": 0, 00:23:42.516 "state": "online", 00:23:42.516 "raid_level": "raid1", 00:23:42.516 "superblock": true, 00:23:42.516 "num_base_bdevs": 2, 00:23:42.516 "num_base_bdevs_discovered": 1, 00:23:42.516 "num_base_bdevs_operational": 1, 00:23:42.516 "base_bdevs_list": [ 00:23:42.516 { 00:23:42.516 "name": null, 00:23:42.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.516 "is_configured": false, 00:23:42.516 "data_offset": 2048, 00:23:42.516 "data_size": 63488 00:23:42.516 }, 00:23:42.516 { 00:23:42.516 "name": "BaseBdev2", 00:23:42.516 "uuid": "a1e3eccc-d650-5fb4-ac05-4cdd652c77b0", 00:23:42.516 "is_configured": true, 00:23:42.516 "data_offset": 2048, 00:23:42.516 "data_size": 63488 00:23:42.516 } 00:23:42.516 ] 00:23:42.516 }' 00:23:42.516 18:38:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.516 18:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:42.516 18:38:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2892751 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2892751 ']' 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2892751 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2892751 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2892751' 00:23:42.774 killing process with pid 2892751 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2892751 00:23:42.774 Received shutdown signal, test time was about 60.000000 seconds 00:23:42.774 00:23:42.774 Latency(us) 00:23:42.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.774 =================================================================================================================== 00:23:42.774 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:42.774 [2024-07-15 18:38:28.131669] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:42.774 [2024-07-15 18:38:28.131754] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.774 [2024-07-15 18:38:28.131793] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.774 [2024-07-15 18:38:28.131802] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26b62b0 name raid_bdev1, state offline 00:23:42.774 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2892751 00:23:42.775 [2024-07-15 18:38:28.157665] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:23:43.033 00:23:43.033 real 0m38.840s 00:23:43.033 user 0m59.533s 00:23:43.033 sys 0m5.461s 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.033 ************************************ 00:23:43.033 END TEST raid_rebuild_test_sb 00:23:43.033 ************************************ 00:23:43.033 18:38:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:43.033 18:38:28 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:23:43.033 18:38:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:43.033 18:38:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.033 18:38:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:43.033 ************************************ 00:23:43.033 START TEST raid_rebuild_test_io 00:23:43.033 ************************************ 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:43.033 18:38:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2899170 00:23:43.033 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2899170 /var/tmp/spdk-raid.sock 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2899170 ']' 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:43.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.034 18:38:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.034 [2024-07-15 18:38:28.459151] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:23:43.034 [2024-07-15 18:38:28.459209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899170 ] 00:23:43.034 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:23:43.034 Zero copy mechanism will not be used. 00:23:43.034 [2024-07-15 18:38:28.556554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.293 [2024-07-15 18:38:28.652261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.293 [2024-07-15 18:38:28.713004] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.293 [2024-07-15 18:38:28.713038] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.229 18:38:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.229 18:38:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:23:44.229 18:38:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:44.229 18:38:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:44.488 BaseBdev1_malloc 00:23:44.488 18:38:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:45.053 [2024-07-15 18:38:30.384264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:45.053 [2024-07-15 18:38:30.384311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.053 [2024-07-15 18:38:30.384333] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a0130 00:23:45.053 [2024-07-15 18:38:30.384343] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.053 [2024-07-15 18:38:30.386082] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.053 [2024-07-15 18:38:30.386109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev1 00:23:45.053 BaseBdev1 00:23:45.053 18:38:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:45.053 18:38:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:45.311 BaseBdev2_malloc 00:23:45.311 18:38:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:45.878 [2024-07-15 18:38:31.142796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:45.878 [2024-07-15 18:38:31.142839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.878 [2024-07-15 18:38:31.142858] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b45fa0 00:23:45.878 [2024-07-15 18:38:31.142868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.878 [2024-07-15 18:38:31.144441] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.878 [2024-07-15 18:38:31.144466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:45.878 BaseBdev2 00:23:45.878 18:38:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:45.878 spare_malloc 00:23:46.136 18:38:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:46.394 spare_delay 00:23:46.394 18:38:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:46.653 [2024-07-15 18:38:32.162070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:46.653 [2024-07-15 18:38:32.162114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.653 [2024-07-15 18:38:32.162134] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b47f40 00:23:46.653 [2024-07-15 18:38:32.162143] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.653 [2024-07-15 18:38:32.163794] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.653 [2024-07-15 18:38:32.163821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:46.653 spare 00:23:46.653 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:47.222 [2024-07-15 18:38:32.643365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.222 [2024-07-15 18:38:32.644728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.222 [2024-07-15 18:38:32.644810] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b49370 00:23:47.222 [2024-07-15 18:38:32.644820] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:47.222 [2024-07-15 18:38:32.645036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b481d0 00:23:47.222 [2024-07-15 18:38:32.645185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b49370 00:23:47.222 [2024-07-15 18:38:32.645194] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x1b49370 00:23:47.222 [2024-07-15 18:38:32.645311] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.222 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.514 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.514 "name": "raid_bdev1", 00:23:47.514 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:47.514 "strip_size_kb": 0, 00:23:47.514 "state": "online", 00:23:47.514 "raid_level": "raid1", 00:23:47.514 "superblock": false, 00:23:47.514 "num_base_bdevs": 2, 00:23:47.514 "num_base_bdevs_discovered": 2, 00:23:47.514 
"num_base_bdevs_operational": 2, 00:23:47.514 "base_bdevs_list": [ 00:23:47.514 { 00:23:47.514 "name": "BaseBdev1", 00:23:47.514 "uuid": "3699e85d-68f4-5753-839b-d2bcaad2e601", 00:23:47.514 "is_configured": true, 00:23:47.514 "data_offset": 0, 00:23:47.514 "data_size": 65536 00:23:47.514 }, 00:23:47.514 { 00:23:47.514 "name": "BaseBdev2", 00:23:47.514 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:47.514 "is_configured": true, 00:23:47.514 "data_offset": 0, 00:23:47.514 "data_size": 65536 00:23:47.514 } 00:23:47.514 ] 00:23:47.514 }' 00:23:47.514 18:38:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.514 18:38:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.080 18:38:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:48.080 18:38:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:48.338 [2024-07-15 18:38:33.718506] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.338 18:38:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:23:48.338 18:38:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.338 18:38:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:48.597 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:23:48.597 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:23:48.597 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
BaseBdev1 00:23:48.597 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:48.597 [2024-07-15 18:38:34.117319] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1997920 00:23:48.597 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:48.597 Zero copy mechanism will not be used. 00:23:48.597 Running I/O for 60 seconds... 00:23:48.856 [2024-07-15 18:38:34.170024] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:48.856 [2024-07-15 18:38:34.170189] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1997920 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.856 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.115 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.115 "name": "raid_bdev1", 00:23:49.115 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:49.115 "strip_size_kb": 0, 00:23:49.115 "state": "online", 00:23:49.115 "raid_level": "raid1", 00:23:49.115 "superblock": false, 00:23:49.115 "num_base_bdevs": 2, 00:23:49.115 "num_base_bdevs_discovered": 1, 00:23:49.115 "num_base_bdevs_operational": 1, 00:23:49.115 "base_bdevs_list": [ 00:23:49.115 { 00:23:49.115 "name": null, 00:23:49.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.115 "is_configured": false, 00:23:49.115 "data_offset": 0, 00:23:49.115 "data_size": 65536 00:23:49.115 }, 00:23:49.115 { 00:23:49.115 "name": "BaseBdev2", 00:23:49.115 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:49.115 "is_configured": true, 00:23:49.115 "data_offset": 0, 00:23:49.115 "data_size": 65536 00:23:49.115 } 00:23:49.115 ] 00:23:49.115 }' 00:23:49.115 18:38:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.115 18:38:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:49.683 18:38:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:49.942 [2024-07-15 18:38:35.368305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:49.942 [2024-07-15 18:38:35.413582] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199ab10 00:23:49.942 [2024-07-15 18:38:35.415730] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:49.942 18:38:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@646 -- # sleep 1 00:23:50.201 [2024-07-15 18:38:35.701655] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:50.770 [2024-07-15 18:38:36.070875] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:50.770 [2024-07-15 18:38:36.301141] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:50.770 [2024-07-15 18:38:36.301310] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.029 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.288 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.288 "name": "raid_bdev1", 00:23:51.288 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:51.288 "strip_size_kb": 0, 00:23:51.288 "state": "online", 00:23:51.288 "raid_level": "raid1", 00:23:51.288 "superblock": false, 00:23:51.288 "num_base_bdevs": 2, 00:23:51.288 "num_base_bdevs_discovered": 2, 00:23:51.288 
"num_base_bdevs_operational": 2, 00:23:51.288 "process": { 00:23:51.288 "type": "rebuild", 00:23:51.288 "target": "spare", 00:23:51.288 "progress": { 00:23:51.288 "blocks": 12288, 00:23:51.288 "percent": 18 00:23:51.288 } 00:23:51.288 }, 00:23:51.288 "base_bdevs_list": [ 00:23:51.288 { 00:23:51.288 "name": "spare", 00:23:51.288 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:51.288 "is_configured": true, 00:23:51.288 "data_offset": 0, 00:23:51.288 "data_size": 65536 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "name": "BaseBdev2", 00:23:51.288 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:51.288 "is_configured": true, 00:23:51.288 "data_offset": 0, 00:23:51.288 "data_size": 65536 00:23:51.288 } 00:23:51.288 ] 00:23:51.288 }' 00:23:51.288 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:51.288 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.288 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:51.288 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.288 18:38:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:51.547 [2024-07-15 18:38:37.032719] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.806 [2024-07-15 18:38:37.114385] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:51.806 [2024-07-15 18:38:37.186964] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:51.806 [2024-07-15 18:38:37.206382] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.806 [2024-07-15 18:38:37.206408] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.806 [2024-07-15 18:38:37.206613] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:51.806 [2024-07-15 18:38:37.229766] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1997920 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.806 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.065 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.065 "name": "raid_bdev1", 00:23:52.065 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:52.065 "strip_size_kb": 
0, 00:23:52.065 "state": "online", 00:23:52.065 "raid_level": "raid1", 00:23:52.065 "superblock": false, 00:23:52.065 "num_base_bdevs": 2, 00:23:52.065 "num_base_bdevs_discovered": 1, 00:23:52.065 "num_base_bdevs_operational": 1, 00:23:52.065 "base_bdevs_list": [ 00:23:52.065 { 00:23:52.065 "name": null, 00:23:52.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.065 "is_configured": false, 00:23:52.065 "data_offset": 0, 00:23:52.065 "data_size": 65536 00:23:52.065 }, 00:23:52.065 { 00:23:52.065 "name": "BaseBdev2", 00:23:52.065 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:52.065 "is_configured": true, 00:23:52.065 "data_offset": 0, 00:23:52.065 "data_size": 65536 00:23:52.065 } 00:23:52.065 ] 00:23:52.065 }' 00:23:52.065 18:38:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.065 18:38:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.633 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.201 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:53.201 "name": "raid_bdev1", 00:23:53.201 "uuid": 
"8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:53.201 "strip_size_kb": 0, 00:23:53.201 "state": "online", 00:23:53.201 "raid_level": "raid1", 00:23:53.201 "superblock": false, 00:23:53.201 "num_base_bdevs": 2, 00:23:53.201 "num_base_bdevs_discovered": 1, 00:23:53.201 "num_base_bdevs_operational": 1, 00:23:53.201 "base_bdevs_list": [ 00:23:53.201 { 00:23:53.201 "name": null, 00:23:53.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.201 "is_configured": false, 00:23:53.201 "data_offset": 0, 00:23:53.201 "data_size": 65536 00:23:53.201 }, 00:23:53.201 { 00:23:53.201 "name": "BaseBdev2", 00:23:53.201 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:53.201 "is_configured": true, 00:23:53.201 "data_offset": 0, 00:23:53.201 "data_size": 65536 00:23:53.201 } 00:23:53.201 ] 00:23:53.201 }' 00:23:53.201 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:53.201 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:53.201 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:53.201 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:53.201 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:53.459 [2024-07-15 18:38:38.790573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:53.459 18:38:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.459 [2024-07-15 18:38:38.844848] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199a820 00:23:53.459 [2024-07-15 18:38:38.846380] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.459 [2024-07-15 18:38:38.966405] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:53.459 [2024-07-15 18:38:38.966693] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:53.718 [2024-07-15 18:38:39.196049] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:53.718 [2024-07-15 18:38:39.196199] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:53.976 [2024-07-15 18:38:39.520707] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:54.234 [2024-07-15 18:38:39.659239] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.493 18:38:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.493 [2024-07-15 18:38:40.000924] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:54.752 18:38:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.752 "name": "raid_bdev1", 00:23:54.752 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:54.752 "strip_size_kb": 0, 00:23:54.752 "state": "online", 00:23:54.752 "raid_level": "raid1", 00:23:54.752 "superblock": false, 00:23:54.752 "num_base_bdevs": 2, 00:23:54.752 "num_base_bdevs_discovered": 2, 00:23:54.752 "num_base_bdevs_operational": 2, 00:23:54.752 "process": { 00:23:54.752 "type": "rebuild", 00:23:54.752 "target": "spare", 00:23:54.752 "progress": { 00:23:54.752 "blocks": 14336, 00:23:54.752 "percent": 21 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 "base_bdevs_list": [ 00:23:54.752 { 00:23:54.752 "name": "spare", 00:23:54.752 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:54.752 "is_configured": true, 00:23:54.752 "data_offset": 0, 00:23:54.752 "data_size": 65536 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "name": "BaseBdev2", 00:23:54.752 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:54.752 "is_configured": true, 00:23:54.752 "data_offset": 0, 00:23:54.752 "data_size": 65536 00:23:54.752 } 00:23:54.752 ] 00:23:54.752 }' 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=879 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.752 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.011 [2024-07-15 18:38:40.442635] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:55.011 [2024-07-15 18:38:40.442935] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:55.011 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:55.011 "name": "raid_bdev1", 00:23:55.011 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:55.011 "strip_size_kb": 0, 00:23:55.011 "state": "online", 00:23:55.011 "raid_level": "raid1", 00:23:55.011 "superblock": false, 00:23:55.011 "num_base_bdevs": 2, 00:23:55.011 "num_base_bdevs_discovered": 2, 00:23:55.011 "num_base_bdevs_operational": 2, 00:23:55.011 "process": { 00:23:55.011 "type": "rebuild", 00:23:55.011 
"target": "spare", 00:23:55.011 "progress": { 00:23:55.011 "blocks": 20480, 00:23:55.011 "percent": 31 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 "base_bdevs_list": [ 00:23:55.011 { 00:23:55.011 "name": "spare", 00:23:55.011 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:55.011 "is_configured": true, 00:23:55.011 "data_offset": 0, 00:23:55.011 "data_size": 65536 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "name": "BaseBdev2", 00:23:55.011 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:55.011 "is_configured": true, 00:23:55.011 "data_offset": 0, 00:23:55.011 "data_size": 65536 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }' 00:23:55.011 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:55.011 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.011 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:55.270 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.270 18:38:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:55.270 [2024-07-15 18:38:40.683266] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:56.206 [2024-07-15 18:38:41.419451] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:56.206 [2024-07-15 18:38:41.419736] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.206 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.206 [2024-07-15 18:38:41.631585] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:56.465 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.465 "name": "raid_bdev1", 00:23:56.465 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:56.465 "strip_size_kb": 0, 00:23:56.465 "state": "online", 00:23:56.465 "raid_level": "raid1", 00:23:56.465 "superblock": false, 00:23:56.465 "num_base_bdevs": 2, 00:23:56.465 "num_base_bdevs_discovered": 2, 00:23:56.465 "num_base_bdevs_operational": 2, 00:23:56.465 "process": { 00:23:56.465 "type": "rebuild", 00:23:56.465 "target": "spare", 00:23:56.465 "progress": { 00:23:56.465 "blocks": 36864, 00:23:56.465 "percent": 56 00:23:56.465 } 00:23:56.465 }, 00:23:56.465 "base_bdevs_list": [ 00:23:56.465 { 00:23:56.465 "name": "spare", 00:23:56.465 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:56.465 "is_configured": true, 00:23:56.465 "data_offset": 0, 00:23:56.465 "data_size": 65536 00:23:56.465 }, 00:23:56.465 { 00:23:56.465 "name": "BaseBdev2", 00:23:56.465 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:56.465 "is_configured": true, 00:23:56.465 "data_offset": 0, 00:23:56.465 "data_size": 65536 
00:23:56.465 } 00:23:56.465 ] 00:23:56.465 }' 00:23:56.465 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:56.465 [2024-07-15 18:38:41.855699] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:56.465 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:56.465 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:56.465 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.465 18:38:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:57.032 [2024-07-15 18:38:42.406683] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:57.292 [2024-07-15 18:38:42.620268] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:57.292 [2024-07-15 18:38:42.739543] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.551 18:38:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.810 18:38:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:57.810 "name": "raid_bdev1", 00:23:57.810 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:57.810 "strip_size_kb": 0, 00:23:57.810 "state": "online", 00:23:57.810 "raid_level": "raid1", 00:23:57.810 "superblock": false, 00:23:57.810 "num_base_bdevs": 2, 00:23:57.810 "num_base_bdevs_discovered": 2, 00:23:57.810 "num_base_bdevs_operational": 2, 00:23:57.810 "process": { 00:23:57.810 "type": "rebuild", 00:23:57.810 "target": "spare", 00:23:57.810 "progress": { 00:23:57.811 "blocks": 57344, 00:23:57.811 "percent": 87 00:23:57.811 } 00:23:57.811 }, 00:23:57.811 "base_bdevs_list": [ 00:23:57.811 { 00:23:57.811 "name": "spare", 00:23:57.811 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:57.811 "is_configured": true, 00:23:57.811 "data_offset": 0, 00:23:57.811 "data_size": 65536 00:23:57.811 }, 00:23:57.811 { 00:23:57.811 "name": "BaseBdev2", 00:23:57.811 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:57.811 "is_configured": true, 00:23:57.811 "data_offset": 0, 00:23:57.811 "data_size": 65536 00:23:57.811 } 00:23:57.811 ] 00:23:57.811 }' 00:23:57.811 18:38:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:57.811 18:38:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.811 18:38:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:57.811 18:38:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.811 18:38:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:58.069 [2024-07-15 
18:38:43.533244] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:58.328 [2024-07-15 18:38:43.642237] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:58.328 [2024-07-15 18:38:43.643292] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.895 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.154 "name": "raid_bdev1", 00:23:59.154 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:59.154 "strip_size_kb": 0, 00:23:59.154 "state": "online", 00:23:59.154 "raid_level": "raid1", 00:23:59.154 "superblock": false, 00:23:59.154 "num_base_bdevs": 2, 00:23:59.154 "num_base_bdevs_discovered": 2, 00:23:59.154 "num_base_bdevs_operational": 2, 00:23:59.154 "base_bdevs_list": [ 00:23:59.154 { 00:23:59.154 "name": "spare", 00:23:59.154 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:59.154 "is_configured": true, 00:23:59.154 
"data_offset": 0, 00:23:59.154 "data_size": 65536 00:23:59.154 }, 00:23:59.154 { 00:23:59.154 "name": "BaseBdev2", 00:23:59.154 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:59.154 "is_configured": true, 00:23:59.154 "data_offset": 0, 00:23:59.154 "data_size": 65536 00:23:59.154 } 00:23:59.154 ] 00:23:59.154 }' 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.154 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.413 "name": "raid_bdev1", 00:23:59.413 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:59.413 "strip_size_kb": 0, 00:23:59.413 "state": 
"online", 00:23:59.413 "raid_level": "raid1", 00:23:59.413 "superblock": false, 00:23:59.413 "num_base_bdevs": 2, 00:23:59.413 "num_base_bdevs_discovered": 2, 00:23:59.413 "num_base_bdevs_operational": 2, 00:23:59.413 "base_bdevs_list": [ 00:23:59.413 { 00:23:59.413 "name": "spare", 00:23:59.413 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:59.413 "is_configured": true, 00:23:59.413 "data_offset": 0, 00:23:59.413 "data_size": 65536 00:23:59.413 }, 00:23:59.413 { 00:23:59.413 "name": "BaseBdev2", 00:23:59.413 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:59.413 "is_configured": true, 00:23:59.413 "data_offset": 0, 00:23:59.413 "data_size": 65536 00:23:59.413 } 00:23:59.413 ] 00:23:59.413 }' 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:59.413 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:59.414 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:59.414 18:38:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:59.414 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:59.414 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:59.414 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.414 18:38:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.673 18:38:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.673 "name": "raid_bdev1", 00:23:59.673 "uuid": "8893e78f-0df6-4126-8aab-80ce2403f6a3", 00:23:59.673 "strip_size_kb": 0, 00:23:59.673 "state": "online", 00:23:59.673 "raid_level": "raid1", 00:23:59.673 "superblock": false, 00:23:59.673 "num_base_bdevs": 2, 00:23:59.673 "num_base_bdevs_discovered": 2, 00:23:59.673 "num_base_bdevs_operational": 2, 00:23:59.673 "base_bdevs_list": [ 00:23:59.673 { 00:23:59.673 "name": "spare", 00:23:59.673 "uuid": "878e6f96-ba4b-563b-8ea8-772e835b4799", 00:23:59.673 "is_configured": true, 00:23:59.673 "data_offset": 0, 00:23:59.673 "data_size": 65536 00:23:59.673 }, 00:23:59.673 { 00:23:59.673 "name": "BaseBdev2", 00:23:59.673 "uuid": "bdd5b5f4-91f9-515a-99f4-2457cb3e2b33", 00:23:59.673 "is_configured": true, 00:23:59.673 "data_offset": 0, 00:23:59.673 "data_size": 65536 00:23:59.673 } 00:23:59.673 ] 00:23:59.673 }' 00:23:59.673 18:38:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.673 18:38:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:00.608 18:38:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:00.608 
[2024-07-15 18:38:46.125644] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.608 [2024-07-15 18:38:46.125673] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:00.867 00:24:00.867 Latency(us) 00:24:00.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.867 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:00.867 raid_bdev1 : 12.04 91.86 275.57 0.00 0.00 14771.79 300.37 119337.94 00:24:00.867 =================================================================================================================== 00:24:00.867 Total : 91.86 275.57 0.00 0.00 14771.79 300.37 119337.94 00:24:00.867 [2024-07-15 18:38:46.194040] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.867 [2024-07-15 18:38:46.194066] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.867 [2024-07-15 18:38:46.194136] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.867 [2024-07-15 18:38:46.194145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b49370 name raid_bdev1, state offline 00:24:00.867 0 00:24:00.867 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.867 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.125 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:01.385 /dev/nbd0 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:01.385 1+0 records in 00:24:01.385 1+0 records out 00:24:01.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238002 s, 17.2 MB/s 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
local bdev_list 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.385 18:38:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:01.644 /dev/nbd1 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:01.644 
1+0 records in 00:24:01.644 1+0 records out 00:24:01.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238421 s, 17.2 MB/s 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:01.644 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:01.976 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- 
# basename /dev/nbd1 00:24:01.976 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:01.976 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:01.977 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:02.236 18:38:47 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2899170 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2899170 ']' 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2899170 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2899170 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2899170' 00:24:02.236 killing process with pid 2899170 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2899170 00:24:02.236 Received shutdown signal, test time was about 13.579087 seconds 00:24:02.236 00:24:02.236 Latency(us) 00:24:02.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.236 =================================================================================================================== 00:24:02.236 Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:24:02.236 [2024-07-15 18:38:47.731935] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:02.236 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2899170 00:24:02.236 [2024-07-15 18:38:47.752350] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:02.495 18:38:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:02.495 00:24:02.495 real 0m19.562s 00:24:02.495 user 0m30.998s 00:24:02.495 sys 0m2.376s 00:24:02.495 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.495 18:38:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.495 ************************************ 00:24:02.495 END TEST raid_rebuild_test_io 00:24:02.495 ************************************ 00:24:02.495 18:38:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:02.495 18:38:47 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:24:02.495 18:38:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:02.495 18:38:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.495 18:38:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:02.495 ************************************ 00:24:02.495 START TEST raid_rebuild_test_sb_io 00:24:02.495 ************************************ 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:02.495 18:38:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 
00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2902382 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2902382 /var/tmp/spdk-raid.sock 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2902382 ']' 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:02.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.495 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.755 [2024-07-15 18:38:48.067981] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:24:02.755 [2024-07-15 18:38:48.068041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902382 ] 00:24:02.755 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:02.755 Zero copy mechanism will not be used. 00:24:02.755 [2024-07-15 18:38:48.167460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.755 [2024-07-15 18:38:48.262815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.014 [2024-07-15 18:38:48.318193] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.014 [2024-07-15 18:38:48.318222] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.582 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.582 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:24:03.582 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:03.582 18:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:03.582 BaseBdev1_malloc 00:24:03.582 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:03.842 [2024-07-15 18:38:49.258378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:03.842 [2024-07-15 18:38:49.258427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.842 [2024-07-15 18:38:49.258446] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbaa130 00:24:03.842 [2024-07-15 18:38:49.258455] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.842 [2024-07-15 18:38:49.260075] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.842 [2024-07-15 18:38:49.260102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:03.842 BaseBdev1 00:24:03.842 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:03.842 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:04.101 BaseBdev2_malloc 00:24:04.101 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:04.101 [2024-07-15 18:38:49.607859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:04.101 [2024-07-15 18:38:49.607901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.101 [2024-07-15 18:38:49.607917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd4ffa0 00:24:04.102 [2024-07-15 18:38:49.607927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.102 [2024-07-15 18:38:49.609362] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.102 [2024-07-15 18:38:49.609386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:04.102 BaseBdev2 00:24:04.102 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b spare_malloc 00:24:04.360 spare_malloc 00:24:04.360 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:04.620 spare_delay 00:24:04.620 18:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:04.879 [2024-07-15 18:38:50.225768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:04.879 [2024-07-15 18:38:50.225812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.879 [2024-07-15 18:38:50.225830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd51f40 00:24:04.879 [2024-07-15 18:38:50.225839] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.879 [2024-07-15 18:38:50.227428] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.879 [2024-07-15 18:38:50.227455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:04.879 spare 00:24:04.879 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:05.138 [2024-07-15 18:38:50.486489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.138 [2024-07-15 18:38:50.487750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:05.138 [2024-07-15 18:38:50.487907] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd53370 00:24:05.138 [2024-07-15 18:38:50.487919] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:05.138 [2024-07-15 18:38:50.488120] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd521d0 00:24:05.138 [2024-07-15 18:38:50.488262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd53370 00:24:05.138 [2024-07-15 18:38:50.488270] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd53370 00:24:05.138 [2024-07-15 18:38:50.488365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.138 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.397 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.397 "name": "raid_bdev1", 00:24:05.397 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:05.397 "strip_size_kb": 0, 00:24:05.397 "state": "online", 00:24:05.397 "raid_level": "raid1", 00:24:05.397 "superblock": true, 00:24:05.397 "num_base_bdevs": 2, 00:24:05.397 "num_base_bdevs_discovered": 2, 00:24:05.397 "num_base_bdevs_operational": 2, 00:24:05.397 "base_bdevs_list": [ 00:24:05.397 { 00:24:05.397 "name": "BaseBdev1", 00:24:05.397 "uuid": "355399a0-45a5-51b0-8188-ed685903d8d0", 00:24:05.397 "is_configured": true, 00:24:05.397 "data_offset": 2048, 00:24:05.397 "data_size": 63488 00:24:05.397 }, 00:24:05.397 { 00:24:05.397 "name": "BaseBdev2", 00:24:05.397 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:05.397 "is_configured": true, 00:24:05.397 "data_offset": 2048, 00:24:05.397 "data_size": 63488 00:24:05.397 } 00:24:05.397 ] 00:24:05.397 }' 00:24:05.397 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.397 18:38:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:05.964 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:05.964 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:06.223 [2024-07-15 18:38:51.557600] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.223 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:24:06.223 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:06.223 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:06.483 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:24:06.483 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:06.483 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:06.483 18:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:06.483 [2024-07-15 18:38:51.986472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 
00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.483 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.742 [2024-07-15 18:38:52.036667] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x883fc0 00:24:06.742 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:06.742 Zero copy mechanism will not be used. 00:24:06.742 Running I/O for 60 seconds... 00:24:06.742 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.742 "name": "raid_bdev1", 00:24:06.742 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:06.742 "strip_size_kb": 0, 00:24:06.742 "state": "online", 00:24:06.742 "raid_level": "raid1", 00:24:06.742 "superblock": true, 00:24:06.742 "num_base_bdevs": 2, 00:24:06.742 "num_base_bdevs_discovered": 1, 00:24:06.742 "num_base_bdevs_operational": 1, 00:24:06.742 "base_bdevs_list": [ 00:24:06.742 { 00:24:06.742 "name": null, 00:24:06.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.742 "is_configured": false, 00:24:06.742 "data_offset": 2048, 00:24:06.742 "data_size": 63488 00:24:06.742 }, 00:24:06.742 { 00:24:06.742 "name": "BaseBdev2", 00:24:06.742 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:06.742 "is_configured": true, 00:24:06.742 "data_offset": 2048, 00:24:06.742 "data_size": 63488 00:24:06.742 } 00:24:06.742 ] 00:24:06.742 }' 00:24:06.742 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.742 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.678 18:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:24:07.678 [2024-07-15 18:38:53.151329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.678 18:38:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:07.678 [2024-07-15 18:38:53.214480] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc40510 00:24:07.678 [2024-07-15 18:38:53.216662] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.935 [2024-07-15 18:38:53.337030] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:07.935 [2024-07-15 18:38:53.337374] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:08.193 [2024-07-15 18:38:53.548903] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:08.193 [2024-07-15 18:38:53.549065] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:08.451 [2024-07-15 18:38:53.928913] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:08.709 [2024-07-15 18:38:54.177252] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:08.709 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.709 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:08.709 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:08.709 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:08.710 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:08.710 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.710 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.968 [2024-07-15 18:38:54.438316] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:08.968 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:08.968 "name": "raid_bdev1", 00:24:08.968 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:08.968 "strip_size_kb": 0, 00:24:08.968 "state": "online", 00:24:08.968 "raid_level": "raid1", 00:24:08.968 "superblock": true, 00:24:08.968 "num_base_bdevs": 2, 00:24:08.968 "num_base_bdevs_discovered": 2, 00:24:08.968 "num_base_bdevs_operational": 2, 00:24:08.968 "process": { 00:24:08.968 "type": "rebuild", 00:24:08.968 "target": "spare", 00:24:08.968 "progress": { 00:24:08.968 "blocks": 14336, 00:24:08.968 "percent": 22 00:24:08.968 } 00:24:08.968 }, 00:24:08.968 "base_bdevs_list": [ 00:24:08.968 { 00:24:08.968 "name": "spare", 00:24:08.968 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:08.968 "is_configured": true, 00:24:08.968 "data_offset": 2048, 00:24:08.968 "data_size": 63488 00:24:08.968 }, 00:24:08.968 { 00:24:08.968 "name": "BaseBdev2", 00:24:08.968 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:08.968 "is_configured": true, 00:24:08.968 "data_offset": 2048, 00:24:08.968 "data_size": 63488 00:24:08.968 } 00:24:08.968 ] 00:24:08.968 }' 00:24:08.968 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.226 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.226 18:38:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.226 [2024-07-15 18:38:54.539681] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:09.226 [2024-07-15 18:38:54.539901] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:09.226 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.226 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:09.484 [2024-07-15 18:38:54.791614] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:09.484 [2024-07-15 18:38:54.792029] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:09.484 [2024-07-15 18:38:54.807693] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.484 [2024-07-15 18:38:54.903494] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:09.485 [2024-07-15 18:38:54.904414] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:09.485 [2024-07-15 18:38:54.914650] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.485 [2024-07-15 18:38:54.914675] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.485 [2024-07-15 18:38:54.914683] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:09.485 [2024-07-15 18:38:54.929044] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x883fc0 
00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.485 18:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.743 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.743 "name": "raid_bdev1", 00:24:09.743 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:09.743 "strip_size_kb": 0, 00:24:09.743 "state": "online", 00:24:09.743 "raid_level": "raid1", 00:24:09.743 "superblock": true, 00:24:09.743 "num_base_bdevs": 2, 00:24:09.743 "num_base_bdevs_discovered": 1, 00:24:09.743 "num_base_bdevs_operational": 1, 00:24:09.743 "base_bdevs_list": [ 00:24:09.743 { 00:24:09.743 "name": null, 
00:24:09.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.743 "is_configured": false, 00:24:09.743 "data_offset": 2048, 00:24:09.743 "data_size": 63488 00:24:09.743 }, 00:24:09.743 { 00:24:09.743 "name": "BaseBdev2", 00:24:09.743 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:09.743 "is_configured": true, 00:24:09.743 "data_offset": 2048, 00:24:09.743 "data_size": 63488 00:24:09.743 } 00:24:09.743 ] 00:24:09.743 }' 00:24:09.743 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.743 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.677 18:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.677 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:10.677 "name": "raid_bdev1", 00:24:10.677 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:10.677 "strip_size_kb": 0, 00:24:10.677 "state": "online", 00:24:10.677 "raid_level": "raid1", 00:24:10.677 "superblock": true, 00:24:10.677 "num_base_bdevs": 2, 00:24:10.677 "num_base_bdevs_discovered": 1, 00:24:10.677 "num_base_bdevs_operational": 1, 
00:24:10.677 "base_bdevs_list": [ 00:24:10.677 { 00:24:10.677 "name": null, 00:24:10.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.677 "is_configured": false, 00:24:10.677 "data_offset": 2048, 00:24:10.677 "data_size": 63488 00:24:10.677 }, 00:24:10.677 { 00:24:10.677 "name": "BaseBdev2", 00:24:10.677 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:10.677 "is_configured": true, 00:24:10.677 "data_offset": 2048, 00:24:10.677 "data_size": 63488 00:24:10.677 } 00:24:10.677 ] 00:24:10.677 }' 00:24:10.677 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:10.936 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:10.936 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:10.936 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:10.936 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:11.194 [2024-07-15 18:38:56.561547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:11.194 18:38:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:11.194 [2024-07-15 18:38:56.642229] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xba9520 00:24:11.194 [2024-07-15 18:38:56.643733] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:11.453 [2024-07-15 18:38:56.754241] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:11.453 [2024-07-15 18:38:56.754526] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:11.453 
[2024-07-15 18:38:56.901680] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:11.453 [2024-07-15 18:38:56.901812] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:12.021 [2024-07-15 18:38:57.273617] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:12.021 [2024-07-15 18:38:57.403123] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:12.021 [2024-07-15 18:38:57.403308] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.280 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.280 [2024-07-15 18:38:57.743917] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:12.539 [2024-07-15 18:38:57.855399] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:12.539 "name": "raid_bdev1", 00:24:12.539 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:12.539 "strip_size_kb": 0, 00:24:12.539 "state": "online", 00:24:12.539 "raid_level": "raid1", 00:24:12.539 "superblock": true, 00:24:12.539 "num_base_bdevs": 2, 00:24:12.539 "num_base_bdevs_discovered": 2, 00:24:12.539 "num_base_bdevs_operational": 2, 00:24:12.539 "process": { 00:24:12.539 "type": "rebuild", 00:24:12.539 "target": "spare", 00:24:12.539 "progress": { 00:24:12.539 "blocks": 16384, 00:24:12.539 "percent": 25 00:24:12.539 } 00:24:12.539 }, 00:24:12.539 "base_bdevs_list": [ 00:24:12.539 { 00:24:12.539 "name": "spare", 00:24:12.539 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:12.539 "is_configured": true, 00:24:12.539 "data_offset": 2048, 00:24:12.539 "data_size": 63488 00:24:12.539 }, 00:24:12.539 { 00:24:12.539 "name": "BaseBdev2", 00:24:12.539 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:12.539 "is_configured": true, 00:24:12.539 "data_offset": 2048, 00:24:12.539 "data_size": 63488 00:24:12.539 } 00:24:12.539 ] 00:24:12.539 }' 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:12.539 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:12.539 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: 
unary operator expected 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=896 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:12.540 18:38:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:12.540 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.540 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.798 [2024-07-15 18:38:58.097098] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:12.798 [2024-07-15 18:38:58.225996] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:12.798 [2024-07-15 18:38:58.226223] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:12.798 18:38:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:12.798 "name": "raid_bdev1", 00:24:12.798 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:12.798 "strip_size_kb": 0, 00:24:12.798 "state": "online", 00:24:12.798 "raid_level": "raid1", 00:24:12.798 "superblock": true, 00:24:12.798 "num_base_bdevs": 2, 00:24:12.798 "num_base_bdevs_discovered": 2, 00:24:12.798 "num_base_bdevs_operational": 2, 00:24:12.798 "process": { 00:24:12.798 "type": "rebuild", 00:24:12.798 "target": "spare", 00:24:12.798 "progress": { 00:24:12.798 "blocks": 22528, 00:24:12.798 "percent": 35 00:24:12.798 } 00:24:12.798 }, 00:24:12.798 "base_bdevs_list": [ 00:24:12.798 { 00:24:12.798 "name": "spare", 00:24:12.798 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:12.798 "is_configured": true, 00:24:12.798 "data_offset": 2048, 00:24:12.798 "data_size": 63488 00:24:12.798 }, 00:24:12.798 { 00:24:12.798 "name": "BaseBdev2", 00:24:12.798 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:12.798 "is_configured": true, 00:24:12.798 "data_offset": 2048, 00:24:12.798 "data_size": 63488 00:24:12.798 } 00:24:12.798 ] 00:24:12.798 }' 00:24:12.798 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:12.798 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.798 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:13.057 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.057 18:38:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:13.057 [2024-07-15 18:38:58.566760] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:13.316 [2024-07-15 18:38:58.796337] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:13.575 [2024-07-15 18:38:59.020048] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:13.834 [2024-07-15 18:38:59.240638] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.834 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.093 [2024-07-15 18:38:59.543705] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:14.093 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.093 "name": "raid_bdev1", 00:24:14.093 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:14.093 "strip_size_kb": 0, 00:24:14.093 "state": "online", 00:24:14.093 "raid_level": "raid1", 00:24:14.093 "superblock": true, 00:24:14.093 "num_base_bdevs": 2, 00:24:14.093 "num_base_bdevs_discovered": 2, 00:24:14.093 
"num_base_bdevs_operational": 2, 00:24:14.093 "process": { 00:24:14.093 "type": "rebuild", 00:24:14.093 "target": "spare", 00:24:14.093 "progress": { 00:24:14.093 "blocks": 38912, 00:24:14.093 "percent": 61 00:24:14.093 } 00:24:14.093 }, 00:24:14.093 "base_bdevs_list": [ 00:24:14.093 { 00:24:14.093 "name": "spare", 00:24:14.093 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:14.093 "is_configured": true, 00:24:14.093 "data_offset": 2048, 00:24:14.093 "data_size": 63488 00:24:14.093 }, 00:24:14.093 { 00:24:14.093 "name": "BaseBdev2", 00:24:14.093 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:14.093 "is_configured": true, 00:24:14.093 "data_offset": 2048, 00:24:14.093 "data_size": 63488 00:24:14.093 } 00:24:14.093 ] 00:24:14.093 }' 00:24:14.093 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.351 [2024-07-15 18:38:59.663836] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:14.351 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.351 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.351 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.352 18:38:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:15.288 18:39:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.288 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.547 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.547 "name": "raid_bdev1", 00:24:15.547 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:15.547 "strip_size_kb": 0, 00:24:15.547 "state": "online", 00:24:15.547 "raid_level": "raid1", 00:24:15.547 "superblock": true, 00:24:15.547 "num_base_bdevs": 2, 00:24:15.547 "num_base_bdevs_discovered": 2, 00:24:15.547 "num_base_bdevs_operational": 2, 00:24:15.547 "process": { 00:24:15.547 "type": "rebuild", 00:24:15.547 "target": "spare", 00:24:15.547 "progress": { 00:24:15.547 "blocks": 61440, 00:24:15.547 "percent": 96 00:24:15.547 } 00:24:15.547 }, 00:24:15.547 "base_bdevs_list": [ 00:24:15.547 { 00:24:15.547 "name": "spare", 00:24:15.547 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:15.547 "is_configured": true, 00:24:15.547 "data_offset": 2048, 00:24:15.547 "data_size": 63488 00:24:15.547 }, 00:24:15.547 { 00:24:15.547 "name": "BaseBdev2", 00:24:15.547 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:15.547 "is_configured": true, 00:24:15.547 "data_offset": 2048, 00:24:15.547 "data_size": 63488 00:24:15.547 } 00:24:15.547 ] 00:24:15.547 }' 00:24:15.547 18:39:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:15.547 [2024-07-15 18:39:01.001698] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:15.547 18:39:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.547 18:39:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:15.547 18:39:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.547 18:39:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:15.806 [2024-07-15 18:39:01.102009] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:15.806 [2024-07-15 18:39:01.103329] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.781 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:16.781 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.781 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:16.781 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:16.782 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:16.782 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:16.782 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.782 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.041 "name": "raid_bdev1", 00:24:17.041 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:17.041 "strip_size_kb": 0, 00:24:17.041 "state": "online", 00:24:17.041 "raid_level": "raid1", 
00:24:17.041 "superblock": true, 00:24:17.041 "num_base_bdevs": 2, 00:24:17.041 "num_base_bdevs_discovered": 2, 00:24:17.041 "num_base_bdevs_operational": 2, 00:24:17.041 "base_bdevs_list": [ 00:24:17.041 { 00:24:17.041 "name": "spare", 00:24:17.041 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:17.041 "is_configured": true, 00:24:17.041 "data_offset": 2048, 00:24:17.041 "data_size": 63488 00:24:17.041 }, 00:24:17.041 { 00:24:17.041 "name": "BaseBdev2", 00:24:17.041 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:17.041 "is_configured": true, 00:24:17.041 "data_offset": 2048, 00:24:17.041 "data_size": 63488 00:24:17.041 } 00:24:17.041 ] 00:24:17.041 }' 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:24:17.041 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.609 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.609 "name": "raid_bdev1", 00:24:17.609 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:17.609 "strip_size_kb": 0, 00:24:17.609 "state": "online", 00:24:17.609 "raid_level": "raid1", 00:24:17.609 "superblock": true, 00:24:17.609 "num_base_bdevs": 2, 00:24:17.609 "num_base_bdevs_discovered": 2, 00:24:17.609 "num_base_bdevs_operational": 2, 00:24:17.609 "base_bdevs_list": [ 00:24:17.609 { 00:24:17.609 "name": "spare", 00:24:17.609 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:17.609 "is_configured": true, 00:24:17.609 "data_offset": 2048, 00:24:17.609 "data_size": 63488 00:24:17.609 }, 00:24:17.609 { 00:24:17.609 "name": "BaseBdev2", 00:24:17.609 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:17.609 "is_configured": true, 00:24:17.609 "data_offset": 2048, 00:24:17.609 "data_size": 63488 00:24:17.609 } 00:24:17.609 ] 00:24:17.609 }' 00:24:17.609 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.609 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:17.609 18:39:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.609 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.869 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.869 "name": "raid_bdev1", 00:24:17.869 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:17.869 "strip_size_kb": 0, 00:24:17.869 "state": "online", 00:24:17.869 "raid_level": "raid1", 00:24:17.869 "superblock": true, 00:24:17.869 "num_base_bdevs": 2, 00:24:17.869 "num_base_bdevs_discovered": 2, 00:24:17.869 "num_base_bdevs_operational": 2, 00:24:17.869 "base_bdevs_list": [ 00:24:17.869 { 00:24:17.869 "name": "spare", 00:24:17.869 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:17.869 "is_configured": true, 00:24:17.869 "data_offset": 2048, 00:24:17.869 "data_size": 63488 00:24:17.869 }, 00:24:17.869 { 00:24:17.869 "name": "BaseBdev2", 00:24:17.869 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:17.869 "is_configured": true, 00:24:17.869 "data_offset": 2048, 00:24:17.869 "data_size": 63488 00:24:17.869 } 00:24:17.869 ] 
00:24:17.869 }' 00:24:17.869 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.869 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:18.436 18:39:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:18.695 [2024-07-15 18:39:04.095986] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:18.695 [2024-07-15 18:39:04.096020] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:18.695 00:24:18.695 Latency(us) 00:24:18.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.695 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:18.695 raid_bdev1 : 12.13 87.16 261.47 0.00 0.00 15548.24 302.32 111848.11 00:24:18.695 =================================================================================================================== 00:24:18.695 Total : 87.16 261.47 0.00 0.00 15548.24 302.32 111848.11 00:24:18.695 [2024-07-15 18:39:04.200469] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.695 [2024-07-15 18:39:04.200498] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.695 [2024-07-15 18:39:04.200570] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.695 [2024-07-15 18:39:04.200580] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd53370 name raid_bdev1, state offline 00:24:18.695 0 00:24:18.695 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.695 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # jq length 00:24:18.953 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:18.953 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:18.953 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:18.953 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:18.953 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:18.953 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:18.954 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:19.235 /dev/nbd0 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:19.235 18:39:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.235 1+0 records in 00:24:19.235 1+0 records out 00:24:19.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178283 s, 23.0 MB/s 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:19.235 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.236 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:19.496 /dev/nbd1 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # 
grep -q -w nbd1 /proc/partitions 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.496 1+0 records in 00:24:19.496 1+0 records out 00:24:19.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191307 s, 21.4 MB/s 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.496 18:39:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:19.756 18:39:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.756 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:20.014 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:20.273 18:39:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:20.531 [2024-07-15 18:39:06.003980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:20.531 [2024-07-15 18:39:06.004028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.531 [2024-07-15 18:39:06.004046] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0xba4fd0 00:24:20.531 [2024-07-15 18:39:06.004056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.531 [2024-07-15 18:39:06.005856] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.531 [2024-07-15 18:39:06.005885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:20.531 [2024-07-15 18:39:06.005978] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:20.531 [2024-07-15 18:39:06.006005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.531 [2024-07-15 18:39:06.006112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.531 spare 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.531 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.790 [2024-07-15 18:39:06.106431] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbd0470 00:24:20.790 [2024-07-15 18:39:06.106454] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:20.790 [2024-07-15 18:39:06.106660] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xba5260 00:24:20.790 [2024-07-15 18:39:06.106826] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbd0470 00:24:20.790 [2024-07-15 18:39:06.106835] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbd0470 00:24:20.790 [2024-07-15 18:39:06.106960] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.790 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.790 "name": "raid_bdev1", 00:24:20.790 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:20.790 "strip_size_kb": 0, 00:24:20.790 "state": "online", 00:24:20.790 "raid_level": "raid1", 00:24:20.790 "superblock": true, 00:24:20.790 "num_base_bdevs": 2, 00:24:20.790 "num_base_bdevs_discovered": 2, 00:24:20.790 "num_base_bdevs_operational": 2, 00:24:20.790 "base_bdevs_list": [ 00:24:20.790 { 00:24:20.790 "name": "spare", 00:24:20.790 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:20.790 "is_configured": true, 00:24:20.790 "data_offset": 2048, 00:24:20.790 "data_size": 63488 00:24:20.790 }, 00:24:20.790 { 00:24:20.790 "name": "BaseBdev2", 00:24:20.790 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:20.790 "is_configured": true, 00:24:20.790 "data_offset": 2048, 00:24:20.790 "data_size": 63488 00:24:20.790 } 00:24:20.790 ] 00:24:20.790 }' 00:24:20.790 18:39:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.790 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.724 18:39:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.724 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:21.724 "name": "raid_bdev1", 00:24:21.724 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:21.724 "strip_size_kb": 0, 00:24:21.724 "state": "online", 00:24:21.724 "raid_level": "raid1", 00:24:21.724 "superblock": true, 00:24:21.724 "num_base_bdevs": 2, 00:24:21.724 "num_base_bdevs_discovered": 2, 00:24:21.724 "num_base_bdevs_operational": 2, 00:24:21.724 "base_bdevs_list": [ 00:24:21.724 { 00:24:21.724 "name": "spare", 00:24:21.724 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:21.724 "is_configured": true, 00:24:21.724 "data_offset": 2048, 00:24:21.724 "data_size": 63488 00:24:21.724 }, 00:24:21.724 { 00:24:21.724 "name": "BaseBdev2", 00:24:21.724 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:21.724 "is_configured": true, 00:24:21.724 "data_offset": 2048, 00:24:21.724 
"data_size": 63488 00:24:21.724 } 00:24:21.724 ] 00:24:21.724 }' 00:24:21.724 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:21.724 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:21.724 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:21.983 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:21.983 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.983 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:22.241 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.241 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:22.241 [2024-07-15 18:39:07.793337] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:22.500 18:39:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.500 18:39:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.759 18:39:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.759 "name": "raid_bdev1", 00:24:22.759 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:22.759 "strip_size_kb": 0, 00:24:22.759 "state": "online", 00:24:22.759 "raid_level": "raid1", 00:24:22.759 "superblock": true, 00:24:22.759 "num_base_bdevs": 2, 00:24:22.759 "num_base_bdevs_discovered": 1, 00:24:22.759 "num_base_bdevs_operational": 1, 00:24:22.759 "base_bdevs_list": [ 00:24:22.759 { 00:24:22.759 "name": null, 00:24:22.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.759 "is_configured": false, 00:24:22.759 "data_offset": 2048, 00:24:22.759 "data_size": 63488 00:24:22.759 }, 00:24:22.759 { 00:24:22.759 "name": "BaseBdev2", 00:24:22.759 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:22.759 "is_configured": true, 00:24:22.759 "data_offset": 2048, 00:24:22.759 "data_size": 63488 00:24:22.759 } 00:24:22.759 ] 00:24:22.759 }' 00:24:22.759 18:39:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.759 18:39:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:23.326 18:39:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:23.585 [2024-07-15 18:39:08.924556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.585 [2024-07-15 18:39:08.924709] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:23.585 [2024-07-15 18:39:08.924724] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:23.585 [2024-07-15 18:39:08.924749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.585 [2024-07-15 18:39:08.929888] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd5c410 00:24:23.585 [2024-07-15 18:39:08.931878] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:23.585 18:39:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.522 18:39:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.781 18:39:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:24.781 "name": "raid_bdev1", 00:24:24.781 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:24.781 "strip_size_kb": 0, 00:24:24.781 "state": "online", 00:24:24.781 "raid_level": "raid1", 00:24:24.781 "superblock": true, 00:24:24.781 "num_base_bdevs": 2, 00:24:24.781 "num_base_bdevs_discovered": 2, 00:24:24.781 "num_base_bdevs_operational": 2, 00:24:24.781 "process": { 00:24:24.781 "type": "rebuild", 00:24:24.781 "target": "spare", 00:24:24.781 "progress": { 00:24:24.781 "blocks": 22528, 00:24:24.781 "percent": 35 00:24:24.781 } 00:24:24.781 }, 00:24:24.781 "base_bdevs_list": [ 00:24:24.781 { 00:24:24.781 "name": "spare", 00:24:24.781 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:24.781 "is_configured": true, 00:24:24.781 "data_offset": 2048, 00:24:24.781 "data_size": 63488 00:24:24.781 }, 00:24:24.781 { 00:24:24.781 "name": "BaseBdev2", 00:24:24.781 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:24.781 "is_configured": true, 00:24:24.781 "data_offset": 2048, 00:24:24.781 "data_size": 63488 00:24:24.781 } 00:24:24.781 ] 00:24:24.781 }' 00:24:24.781 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:24.781 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:24.781 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:24.781 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.781 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:25.348 [2024-07-15 18:39:10.712591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.348 [2024-07-15 18:39:10.745477] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:25.348 [2024-07-15 18:39:10.745521] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.348 [2024-07-15 18:39:10.745535] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.348 [2024-07-15 18:39:10.745541] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.348 18:39:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.618 18:39:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.618 "name": "raid_bdev1", 00:24:25.618 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:25.618 "strip_size_kb": 0, 00:24:25.618 "state": "online", 00:24:25.618 "raid_level": "raid1", 00:24:25.618 "superblock": true, 00:24:25.618 "num_base_bdevs": 2, 00:24:25.618 "num_base_bdevs_discovered": 1, 00:24:25.618 "num_base_bdevs_operational": 1, 00:24:25.618 "base_bdevs_list": [ 00:24:25.618 { 00:24:25.618 "name": null, 00:24:25.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.618 "is_configured": false, 00:24:25.618 "data_offset": 2048, 00:24:25.618 "data_size": 63488 00:24:25.618 }, 00:24:25.618 { 00:24:25.618 "name": "BaseBdev2", 00:24:25.618 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:25.618 "is_configured": true, 00:24:25.618 "data_offset": 2048, 00:24:25.618 "data_size": 63488 00:24:25.618 } 00:24:25.618 ] 00:24:25.618 }' 00:24:25.618 18:39:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.618 18:39:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:26.186 18:39:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:26.753 [2024-07-15 18:39:12.134007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:26.753 [2024-07-15 18:39:12.134057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.753 [2024-07-15 18:39:12.134077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbd07f0 00:24:26.753 [2024-07-15 18:39:12.134086] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.753 [2024-07-15 18:39:12.134467] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.753 [2024-07-15 
18:39:12.134483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:26.753 [2024-07-15 18:39:12.134565] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:26.753 [2024-07-15 18:39:12.134575] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:26.753 [2024-07-15 18:39:12.134583] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:26.753 [2024-07-15 18:39:12.134598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:26.753 [2024-07-15 18:39:12.139697] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xba9e00 00:24:26.753 spare 00:24:26.753 [2024-07-15 18:39:12.141201] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:26.753 18:39:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.687 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.944 18:39:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:27.944 "name": "raid_bdev1", 00:24:27.944 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:27.944 "strip_size_kb": 0, 00:24:27.944 "state": "online", 00:24:27.944 "raid_level": "raid1", 00:24:27.944 "superblock": true, 00:24:27.944 "num_base_bdevs": 2, 00:24:27.944 "num_base_bdevs_discovered": 2, 00:24:27.944 "num_base_bdevs_operational": 2, 00:24:27.944 "process": { 00:24:27.944 "type": "rebuild", 00:24:27.944 "target": "spare", 00:24:27.944 "progress": { 00:24:27.944 "blocks": 24576, 00:24:27.944 "percent": 38 00:24:27.944 } 00:24:27.944 }, 00:24:27.944 "base_bdevs_list": [ 00:24:27.944 { 00:24:27.944 "name": "spare", 00:24:27.944 "uuid": "44cae396-affd-54dc-b666-9f434e24e9c8", 00:24:27.944 "is_configured": true, 00:24:27.944 "data_offset": 2048, 00:24:27.944 "data_size": 63488 00:24:27.944 }, 00:24:27.944 { 00:24:27.944 "name": "BaseBdev2", 00:24:27.944 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:27.944 "is_configured": true, 00:24:27.944 "data_offset": 2048, 00:24:27.944 "data_size": 63488 00:24:27.944 } 00:24:27.944 ] 00:24:27.944 }' 00:24:27.944 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:27.944 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:27.944 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.202 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.202 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:28.202 [2024-07-15 18:39:13.753112] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:28.202 [2024-07-15 18:39:13.753477] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:28.202 [2024-07-15 18:39:13.753516] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.202 [2024-07-15 18:39:13.753530] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:28.202 [2024-07-15 18:39:13.753536] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.460 18:39:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.717 18:39:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.717 "name": "raid_bdev1", 00:24:28.717 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:28.717 "strip_size_kb": 0, 00:24:28.717 "state": "online", 00:24:28.717 "raid_level": "raid1", 00:24:28.717 "superblock": true, 00:24:28.717 "num_base_bdevs": 2, 00:24:28.717 "num_base_bdevs_discovered": 1, 00:24:28.717 "num_base_bdevs_operational": 1, 00:24:28.717 "base_bdevs_list": [ 00:24:28.717 { 00:24:28.717 "name": null, 00:24:28.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.717 "is_configured": false, 00:24:28.717 "data_offset": 2048, 00:24:28.717 "data_size": 63488 00:24:28.717 }, 00:24:28.717 { 00:24:28.717 "name": "BaseBdev2", 00:24:28.717 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:28.717 "is_configured": true, 00:24:28.717 "data_offset": 2048, 00:24:28.717 "data_size": 63488 00:24:28.717 } 00:24:28.717 ] 00:24:28.717 }' 00:24:28.717 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.717 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:29.283 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:29.283 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:29.284 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:29.284 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:29.284 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:29.284 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.284 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.542 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.542 "name": "raid_bdev1", 00:24:29.542 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:29.542 "strip_size_kb": 0, 00:24:29.542 "state": "online", 00:24:29.542 "raid_level": "raid1", 00:24:29.542 "superblock": true, 00:24:29.542 "num_base_bdevs": 2, 00:24:29.542 "num_base_bdevs_discovered": 1, 00:24:29.542 "num_base_bdevs_operational": 1, 00:24:29.542 "base_bdevs_list": [ 00:24:29.542 { 00:24:29.542 "name": null, 00:24:29.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.542 "is_configured": false, 00:24:29.542 "data_offset": 2048, 00:24:29.542 "data_size": 63488 00:24:29.542 }, 00:24:29.542 { 00:24:29.542 "name": "BaseBdev2", 00:24:29.542 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:29.542 "is_configured": true, 00:24:29.542 "data_offset": 2048, 00:24:29.542 "data_size": 63488 00:24:29.542 } 00:24:29.542 ] 00:24:29.542 }' 00:24:29.542 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:29.542 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:29.542 18:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:29.542 18:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:29.542 18:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:29.800 18:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:30.369 [2024-07-15 18:39:15.739763] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:30.369 [2024-07-15 18:39:15.739812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.369 [2024-07-15 18:39:15.739831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbaa360 00:24:30.369 [2024-07-15 18:39:15.739840] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.369 [2024-07-15 18:39:15.740201] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.369 [2024-07-15 18:39:15.740217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:30.369 [2024-07-15 18:39:15.740283] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:30.369 [2024-07-15 18:39:15.740292] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:30.369 [2024-07-15 18:39:15.740299] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:30.369 BaseBdev1 00:24:30.369 18:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:31.389 18:39:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.389 18:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.648 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.648 "name": "raid_bdev1", 00:24:31.648 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:31.648 "strip_size_kb": 0, 00:24:31.648 "state": "online", 00:24:31.648 "raid_level": "raid1", 00:24:31.648 "superblock": true, 00:24:31.648 "num_base_bdevs": 2, 00:24:31.648 "num_base_bdevs_discovered": 1, 00:24:31.648 "num_base_bdevs_operational": 1, 00:24:31.648 "base_bdevs_list": [ 00:24:31.648 { 00:24:31.648 "name": null, 00:24:31.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.648 "is_configured": false, 00:24:31.648 "data_offset": 2048, 00:24:31.648 "data_size": 63488 00:24:31.648 }, 00:24:31.648 { 00:24:31.648 "name": "BaseBdev2", 00:24:31.648 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:31.648 "is_configured": true, 00:24:31.648 "data_offset": 2048, 00:24:31.648 "data_size": 63488 00:24:31.648 } 00:24:31.648 ] 00:24:31.648 }' 00:24:31.648 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.648 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.214 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.474 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.474 "name": "raid_bdev1", 00:24:32.474 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:32.474 "strip_size_kb": 0, 00:24:32.474 "state": "online", 00:24:32.474 "raid_level": "raid1", 00:24:32.474 "superblock": true, 00:24:32.474 "num_base_bdevs": 2, 00:24:32.474 "num_base_bdevs_discovered": 1, 00:24:32.475 "num_base_bdevs_operational": 1, 00:24:32.475 "base_bdevs_list": [ 00:24:32.475 { 00:24:32.475 "name": null, 00:24:32.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.475 "is_configured": false, 00:24:32.475 "data_offset": 2048, 00:24:32.475 "data_size": 63488 00:24:32.475 }, 00:24:32.475 { 00:24:32.475 "name": "BaseBdev2", 00:24:32.475 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:32.475 "is_configured": true, 00:24:32.475 "data_offset": 2048, 00:24:32.475 "data_size": 63488 00:24:32.475 } 00:24:32.475 ] 00:24:32.475 }' 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none 
== \n\o\n\e ]] 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:32.475 18:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:32.734 [2024-07-15 18:39:18.162649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.734 [2024-07-15 18:39:18.162769] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:32.734 [2024-07-15 18:39:18.162783] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:32.734 request: 00:24:32.734 { 00:24:32.734 "base_bdev": "BaseBdev1", 00:24:32.734 "raid_bdev": "raid_bdev1", 00:24:32.734 "method": "bdev_raid_add_base_bdev", 00:24:32.734 "req_id": 1 00:24:32.734 } 00:24:32.734 Got JSON-RPC error response 00:24:32.734 response: 00:24:32.734 { 00:24:32.734 "code": -22, 00:24:32.734 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:32.734 } 00:24:32.734 18:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:24:32.734 18:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:32.734 18:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:32.734 18:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:32.734 18:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.670 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.929 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.929 "name": "raid_bdev1", 00:24:33.929 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:33.929 "strip_size_kb": 0, 00:24:33.929 "state": "online", 00:24:33.929 "raid_level": "raid1", 00:24:33.929 "superblock": true, 00:24:33.929 "num_base_bdevs": 2, 00:24:33.929 "num_base_bdevs_discovered": 1, 00:24:33.929 "num_base_bdevs_operational": 1, 00:24:33.929 "base_bdevs_list": [ 00:24:33.929 { 00:24:33.929 "name": null, 00:24:33.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.929 "is_configured": false, 00:24:33.929 "data_offset": 2048, 00:24:33.929 "data_size": 63488 00:24:33.929 }, 00:24:33.929 { 00:24:33.929 "name": "BaseBdev2", 00:24:33.929 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:33.929 
"is_configured": true, 00:24:33.929 "data_offset": 2048, 00:24:33.929 "data_size": 63488 00:24:33.929 } 00:24:33.929 ] 00:24:33.929 }' 00:24:33.929 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.929 18:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:34.866 "name": "raid_bdev1", 00:24:34.866 "uuid": "4e2432ae-3d04-4a48-8fc4-ea1c0898bfef", 00:24:34.866 "strip_size_kb": 0, 00:24:34.866 "state": "online", 00:24:34.866 "raid_level": "raid1", 00:24:34.866 "superblock": true, 00:24:34.866 "num_base_bdevs": 2, 00:24:34.866 "num_base_bdevs_discovered": 1, 00:24:34.866 "num_base_bdevs_operational": 1, 00:24:34.866 "base_bdevs_list": [ 00:24:34.866 { 00:24:34.866 "name": null, 00:24:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.866 "is_configured": false, 00:24:34.866 "data_offset": 2048, 00:24:34.866 "data_size": 63488 00:24:34.866 }, 00:24:34.866 { 00:24:34.866 "name": 
"BaseBdev2", 00:24:34.866 "uuid": "5889d7a0-592e-5d6e-afc8-402bcef83590", 00:24:34.866 "is_configured": true, 00:24:34.866 "data_offset": 2048, 00:24:34.866 "data_size": 63488 00:24:34.866 } 00:24:34.866 ] 00:24:34.866 }' 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2902382 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2902382 ']' 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2902382 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:24:34.866 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.126 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902382 00:24:35.126 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:35.126 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:35.126 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902382' 00:24:35.126 killing process with pid 2902382 00:24:35.126 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2902382 00:24:35.126 Received shutdown signal, test time was about 28.345696 seconds 00:24:35.126 00:24:35.126 Latency(us) 
00:24:35.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.126 =================================================================================================================== 00:24:35.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.126 [2024-07-15 18:39:20.454210] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:35.126 [2024-07-15 18:39:20.454308] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.126 [2024-07-15 18:39:20.454349] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.126 [2024-07-15 18:39:20.454360] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbd0470 name raid_bdev1, state offline 00:24:35.126 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2902382 00:24:35.126 [2024-07-15 18:39:20.475128] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:35.385 00:24:35.385 real 0m32.678s 00:24:35.385 user 0m52.592s 00:24:35.385 sys 0m3.672s 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:35.385 ************************************ 00:24:35.385 END TEST raid_rebuild_test_sb_io 00:24:35.385 ************************************ 00:24:35.385 18:39:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:35.385 18:39:20 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:24:35.385 18:39:20 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:24:35.385 18:39:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:35.385 18:39:20 bdev_raid -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:24:35.385 18:39:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.385 ************************************ 00:24:35.385 START TEST raid_rebuild_test 00:24:35.385 ************************************ 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2908309 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2908309 /var/tmp/spdk-raid.sock 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2908309 ']' 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:35.385 
18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:35.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.385 18:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.385 [2024-07-15 18:39:20.791460] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:24:35.385 [2024-07-15 18:39:20.791523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908309 ] 00:24:35.385 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:35.385 Zero copy mechanism will not be used. 
00:24:35.385 [2024-07-15 18:39:20.890895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.644 [2024-07-15 18:39:20.987013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.644 [2024-07-15 18:39:21.047216] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.644 [2024-07-15 18:39:21.047250] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.212 18:39:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.212 18:39:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:24:36.212 18:39:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:36.212 18:39:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:36.471 BaseBdev1_malloc 00:24:36.471 18:39:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:36.730 [2024-07-15 18:39:22.240460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:36.730 [2024-07-15 18:39:22.240507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.730 [2024-07-15 18:39:22.240525] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xadc130 00:24:36.730 [2024-07-15 18:39:22.240535] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.730 [2024-07-15 18:39:22.242133] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.730 [2024-07-15 18:39:22.242160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:36.730 BaseBdev1 00:24:36.730 18:39:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:36.730 18:39:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:36.989 BaseBdev2_malloc 00:24:36.989 18:39:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:37.248 [2024-07-15 18:39:22.758284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:37.248 [2024-07-15 18:39:22.758324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.248 [2024-07-15 18:39:22.758339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc81fa0 00:24:37.248 [2024-07-15 18:39:22.758349] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.248 [2024-07-15 18:39:22.759779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.248 [2024-07-15 18:39:22.759804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:37.248 BaseBdev2 00:24:37.248 18:39:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:37.248 18:39:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:37.507 BaseBdev3_malloc 00:24:37.507 18:39:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:37.766 [2024-07-15 18:39:23.283983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev3_malloc 00:24:37.766 [2024-07-15 18:39:23.284021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.766 [2024-07-15 18:39:23.284036] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8d970 00:24:37.766 [2024-07-15 18:39:23.284046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.766 [2024-07-15 18:39:23.285480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.766 [2024-07-15 18:39:23.285506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:37.766 BaseBdev3 00:24:37.766 18:39:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:37.766 18:39:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:38.023 BaseBdev4_malloc 00:24:38.024 18:39:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:38.282 [2024-07-15 18:39:23.797791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:38.282 [2024-07-15 18:39:23.797837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.282 [2024-07-15 18:39:23.797856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc848c0 00:24:38.282 [2024-07-15 18:39:23.797866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.282 [2024-07-15 18:39:23.799401] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.282 [2024-07-15 18:39:23.799429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:38.282 BaseBdev4 
00:24:38.282 18:39:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:38.541 spare_malloc 00:24:38.541 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:38.800 spare_delay 00:24:38.800 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:39.059 [2024-07-15 18:39:24.576232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:39.059 [2024-07-15 18:39:24.576270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.059 [2024-07-15 18:39:24.576286] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad4bf0 00:24:39.059 [2024-07-15 18:39:24.576295] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.059 [2024-07-15 18:39:24.577785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.059 [2024-07-15 18:39:24.577809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:39.059 spare 00:24:39.059 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:39.318 [2024-07-15 18:39:24.828932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.318 [2024-07-15 18:39:24.830164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:39.318 [2024-07-15 
18:39:24.830219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:39.318 [2024-07-15 18:39:24.830263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:39.318 [2024-07-15 18:39:24.830345] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xad6990 00:24:39.318 [2024-07-15 18:39:24.830358] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:39.318 [2024-07-15 18:39:24.830553] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xada1f0 00:24:39.319 [2024-07-15 18:39:24.830697] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xad6990 00:24:39.319 [2024-07-15 18:39:24.830706] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xad6990 00:24:39.319 [2024-07-15 18:39:24.830813] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.319 18:39:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.319 18:39:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.578 18:39:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.578 "name": "raid_bdev1", 00:24:39.578 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:39.578 "strip_size_kb": 0, 00:24:39.578 "state": "online", 00:24:39.578 "raid_level": "raid1", 00:24:39.578 "superblock": false, 00:24:39.578 "num_base_bdevs": 4, 00:24:39.578 "num_base_bdevs_discovered": 4, 00:24:39.578 "num_base_bdevs_operational": 4, 00:24:39.578 "base_bdevs_list": [ 00:24:39.578 { 00:24:39.578 "name": "BaseBdev1", 00:24:39.578 "uuid": "d728c8e7-759b-5389-8317-fc0c8ec9ec86", 00:24:39.578 "is_configured": true, 00:24:39.578 "data_offset": 0, 00:24:39.578 "data_size": 65536 00:24:39.578 }, 00:24:39.578 { 00:24:39.578 "name": "BaseBdev2", 00:24:39.578 "uuid": "677de884-9694-526f-b1ab-a9b9839b7fe4", 00:24:39.578 "is_configured": true, 00:24:39.578 "data_offset": 0, 00:24:39.578 "data_size": 65536 00:24:39.578 }, 00:24:39.578 { 00:24:39.578 "name": "BaseBdev3", 00:24:39.578 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:39.578 "is_configured": true, 00:24:39.578 "data_offset": 0, 00:24:39.578 "data_size": 65536 00:24:39.578 }, 00:24:39.578 { 00:24:39.578 "name": "BaseBdev4", 00:24:39.578 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:39.578 "is_configured": true, 00:24:39.578 "data_offset": 0, 00:24:39.578 "data_size": 65536 00:24:39.578 } 00:24:39.578 ] 00:24:39.578 }' 00:24:39.578 18:39:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.578 18:39:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:24:40.513 18:39:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:40.513 18:39:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:40.513 [2024-07-15 18:39:25.976337] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:40.513 18:39:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:24:40.513 18:39:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.513 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:40.773 
18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:40.773 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:41.032 [2024-07-15 18:39:26.497443] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xadb550 00:24:41.032 /dev/nbd0 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.032 1+0 records in 00:24:41.032 1+0 records out 00:24:41.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222245 s, 18.4 MB/s 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:24:41.032 18:39:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:49.154 65536+0 records in 00:24:49.154 65536+0 records out 00:24:49.154 33554432 bytes (34 MB, 32 MiB) copied, 7.26797 s, 4.6 MB/s 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.154 18:39:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:49.154 [2024-07-15 18:39:34.099647] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:49.154 [2024-07-15 18:39:34.344358] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:49.154 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:49.155 "name": "raid_bdev1", 00:24:49.155 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:49.155 "strip_size_kb": 0, 00:24:49.155 "state": "online", 00:24:49.155 "raid_level": "raid1", 00:24:49.155 "superblock": false, 00:24:49.155 "num_base_bdevs": 4, 00:24:49.155 "num_base_bdevs_discovered": 3, 00:24:49.155 "num_base_bdevs_operational": 3, 00:24:49.155 "base_bdevs_list": [ 00:24:49.155 { 00:24:49.155 "name": null, 00:24:49.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.155 "is_configured": false, 00:24:49.155 "data_offset": 0, 00:24:49.155 "data_size": 65536 00:24:49.155 }, 00:24:49.155 { 00:24:49.155 "name": "BaseBdev2", 00:24:49.155 "uuid": "677de884-9694-526f-b1ab-a9b9839b7fe4", 00:24:49.155 "is_configured": true, 00:24:49.155 "data_offset": 0, 00:24:49.155 "data_size": 65536 00:24:49.155 }, 00:24:49.155 { 00:24:49.155 "name": "BaseBdev3", 00:24:49.155 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:49.155 "is_configured": true, 00:24:49.155 "data_offset": 0, 00:24:49.155 "data_size": 65536 00:24:49.155 }, 00:24:49.155 { 00:24:49.155 "name": "BaseBdev4", 00:24:49.155 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:49.155 "is_configured": true, 00:24:49.155 "data_offset": 0, 00:24:49.155 
"data_size": 65536 00:24:49.155 } 00:24:49.155 ] 00:24:49.155 }' 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:49.155 18:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.723 18:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:49.982 [2024-07-15 18:39:35.491470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:49.982 [2024-07-15 18:39:35.495480] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xad7030 00:24:49.982 [2024-07-15 18:39:35.497592] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:49.982 18:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.356 "name": "raid_bdev1", 00:24:51.356 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 
00:24:51.356 "strip_size_kb": 0, 00:24:51.356 "state": "online", 00:24:51.356 "raid_level": "raid1", 00:24:51.356 "superblock": false, 00:24:51.356 "num_base_bdevs": 4, 00:24:51.356 "num_base_bdevs_discovered": 4, 00:24:51.356 "num_base_bdevs_operational": 4, 00:24:51.356 "process": { 00:24:51.356 "type": "rebuild", 00:24:51.356 "target": "spare", 00:24:51.356 "progress": { 00:24:51.356 "blocks": 24576, 00:24:51.356 "percent": 37 00:24:51.356 } 00:24:51.356 }, 00:24:51.356 "base_bdevs_list": [ 00:24:51.356 { 00:24:51.356 "name": "spare", 00:24:51.356 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:51.356 "is_configured": true, 00:24:51.356 "data_offset": 0, 00:24:51.356 "data_size": 65536 00:24:51.356 }, 00:24:51.356 { 00:24:51.356 "name": "BaseBdev2", 00:24:51.356 "uuid": "677de884-9694-526f-b1ab-a9b9839b7fe4", 00:24:51.356 "is_configured": true, 00:24:51.356 "data_offset": 0, 00:24:51.356 "data_size": 65536 00:24:51.356 }, 00:24:51.356 { 00:24:51.356 "name": "BaseBdev3", 00:24:51.356 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:51.356 "is_configured": true, 00:24:51.356 "data_offset": 0, 00:24:51.356 "data_size": 65536 00:24:51.356 }, 00:24:51.356 { 00:24:51.356 "name": "BaseBdev4", 00:24:51.356 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:51.356 "is_configured": true, 00:24:51.356 "data_offset": 0, 00:24:51.356 "data_size": 65536 00:24:51.356 } 00:24:51.356 ] 00:24:51.356 }' 00:24:51.356 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:51.357 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.357 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:51.357 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.357 18:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:51.659 [2024-07-15 18:39:37.116839] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.917 [2024-07-15 18:39:37.210400] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:51.917 [2024-07-15 18:39:37.210445] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.917 [2024-07-15 18:39:37.210462] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.917 [2024-07-15 18:39:37.210469] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:51.917 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.174 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.174 "name": "raid_bdev1", 00:24:52.174 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:52.174 "strip_size_kb": 0, 00:24:52.174 "state": "online", 00:24:52.174 "raid_level": "raid1", 00:24:52.174 "superblock": false, 00:24:52.174 "num_base_bdevs": 4, 00:24:52.174 "num_base_bdevs_discovered": 3, 00:24:52.174 "num_base_bdevs_operational": 3, 00:24:52.174 "base_bdevs_list": [ 00:24:52.174 { 00:24:52.174 "name": null, 00:24:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.175 "is_configured": false, 00:24:52.175 "data_offset": 0, 00:24:52.175 "data_size": 65536 00:24:52.175 }, 00:24:52.175 { 00:24:52.175 "name": "BaseBdev2", 00:24:52.175 "uuid": "677de884-9694-526f-b1ab-a9b9839b7fe4", 00:24:52.175 "is_configured": true, 00:24:52.175 "data_offset": 0, 00:24:52.175 "data_size": 65536 00:24:52.175 }, 00:24:52.175 { 00:24:52.175 "name": "BaseBdev3", 00:24:52.175 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:52.175 "is_configured": true, 00:24:52.175 "data_offset": 0, 00:24:52.175 "data_size": 65536 00:24:52.175 }, 00:24:52.175 { 00:24:52.175 "name": "BaseBdev4", 00:24:52.175 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:52.175 "is_configured": true, 00:24:52.175 "data_offset": 0, 00:24:52.175 "data_size": 65536 00:24:52.175 } 00:24:52.175 ] 00:24:52.175 }' 00:24:52.175 18:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.175 18:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.740 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.740 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:52.740 
18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:52.740 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:52.740 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:52.740 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.740 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.998 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:52.998 "name": "raid_bdev1", 00:24:52.998 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:52.998 "strip_size_kb": 0, 00:24:52.998 "state": "online", 00:24:52.998 "raid_level": "raid1", 00:24:52.998 "superblock": false, 00:24:52.998 "num_base_bdevs": 4, 00:24:52.998 "num_base_bdevs_discovered": 3, 00:24:52.998 "num_base_bdevs_operational": 3, 00:24:52.998 "base_bdevs_list": [ 00:24:52.998 { 00:24:52.998 "name": null, 00:24:52.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.998 "is_configured": false, 00:24:52.998 "data_offset": 0, 00:24:52.998 "data_size": 65536 00:24:52.998 }, 00:24:52.998 { 00:24:52.998 "name": "BaseBdev2", 00:24:52.998 "uuid": "677de884-9694-526f-b1ab-a9b9839b7fe4", 00:24:52.998 "is_configured": true, 00:24:52.998 "data_offset": 0, 00:24:52.998 "data_size": 65536 00:24:52.998 }, 00:24:52.998 { 00:24:52.998 "name": "BaseBdev3", 00:24:52.998 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:52.998 "is_configured": true, 00:24:52.998 "data_offset": 0, 00:24:52.998 "data_size": 65536 00:24:52.998 }, 00:24:52.998 { 00:24:52.998 "name": "BaseBdev4", 00:24:52.998 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:52.998 "is_configured": true, 00:24:52.998 "data_offset": 0, 00:24:52.998 "data_size": 65536 00:24:52.998 } 
00:24:52.998 ] 00:24:52.998 }' 00:24:52.998 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:52.998 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:52.998 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:52.998 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:52.998 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:53.256 [2024-07-15 18:39:38.686321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:53.256 [2024-07-15 18:39:38.690323] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xadb5d0 00:24:53.257 [2024-07-15 18:39:38.691876] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:53.257 18:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.192 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:54.451 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:54.451 "name": "raid_bdev1", 00:24:54.451 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:54.451 "strip_size_kb": 0, 00:24:54.451 "state": "online", 00:24:54.451 "raid_level": "raid1", 00:24:54.451 "superblock": false, 00:24:54.451 "num_base_bdevs": 4, 00:24:54.451 "num_base_bdevs_discovered": 4, 00:24:54.451 "num_base_bdevs_operational": 4, 00:24:54.451 "process": { 00:24:54.451 "type": "rebuild", 00:24:54.451 "target": "spare", 00:24:54.451 "progress": { 00:24:54.451 "blocks": 24576, 00:24:54.451 "percent": 37 00:24:54.451 } 00:24:54.451 }, 00:24:54.451 "base_bdevs_list": [ 00:24:54.451 { 00:24:54.451 "name": "spare", 00:24:54.451 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:54.451 "is_configured": true, 00:24:54.452 "data_offset": 0, 00:24:54.452 "data_size": 65536 00:24:54.452 }, 00:24:54.452 { 00:24:54.452 "name": "BaseBdev2", 00:24:54.452 "uuid": "677de884-9694-526f-b1ab-a9b9839b7fe4", 00:24:54.452 "is_configured": true, 00:24:54.452 "data_offset": 0, 00:24:54.452 "data_size": 65536 00:24:54.452 }, 00:24:54.452 { 00:24:54.452 "name": "BaseBdev3", 00:24:54.452 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:54.452 "is_configured": true, 00:24:54.452 "data_offset": 0, 00:24:54.452 "data_size": 65536 00:24:54.452 }, 00:24:54.452 { 00:24:54.452 "name": "BaseBdev4", 00:24:54.452 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:54.452 "is_configured": true, 00:24:54.452 "data_offset": 0, 00:24:54.452 "data_size": 65536 00:24:54.452 } 00:24:54.452 ] 00:24:54.452 }' 00:24:54.452 18:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:54.710 18:39:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:54.710 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:54.970 [2024-07-15 18:39:40.300063] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:54.970 [2024-07-15 18:39:40.303876] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xadb5d0 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.970 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:55.229 "name": "raid_bdev1", 00:24:55.229 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:55.229 "strip_size_kb": 0, 00:24:55.229 "state": "online", 00:24:55.229 "raid_level": "raid1", 00:24:55.229 "superblock": false, 00:24:55.229 "num_base_bdevs": 4, 00:24:55.229 "num_base_bdevs_discovered": 3, 00:24:55.229 "num_base_bdevs_operational": 3, 00:24:55.229 "process": { 00:24:55.229 "type": "rebuild", 00:24:55.229 "target": "spare", 00:24:55.229 "progress": { 00:24:55.229 "blocks": 36864, 00:24:55.229 "percent": 56 00:24:55.229 } 00:24:55.229 }, 00:24:55.229 "base_bdevs_list": [ 00:24:55.229 { 00:24:55.229 "name": "spare", 00:24:55.229 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:55.229 "is_configured": true, 00:24:55.229 "data_offset": 0, 00:24:55.229 "data_size": 65536 00:24:55.229 }, 00:24:55.229 { 00:24:55.229 "name": null, 00:24:55.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.229 "is_configured": false, 00:24:55.229 "data_offset": 0, 00:24:55.229 "data_size": 65536 00:24:55.229 }, 00:24:55.229 { 00:24:55.229 "name": "BaseBdev3", 00:24:55.229 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:55.229 "is_configured": true, 00:24:55.229 "data_offset": 0, 00:24:55.229 "data_size": 65536 00:24:55.229 }, 00:24:55.229 { 00:24:55.229 "name": "BaseBdev4", 00:24:55.229 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:55.229 "is_configured": true, 00:24:55.229 "data_offset": 0, 00:24:55.229 "data_size": 65536 00:24:55.229 } 00:24:55.229 ] 00:24:55.229 }' 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:55.229 
18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=939 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.229 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.488 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:55.488 "name": "raid_bdev1", 00:24:55.488 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:55.488 "strip_size_kb": 0, 00:24:55.488 "state": "online", 00:24:55.488 "raid_level": "raid1", 00:24:55.488 "superblock": false, 00:24:55.488 "num_base_bdevs": 4, 00:24:55.488 "num_base_bdevs_discovered": 3, 00:24:55.488 "num_base_bdevs_operational": 3, 00:24:55.488 "process": { 00:24:55.488 "type": "rebuild", 00:24:55.488 "target": "spare", 00:24:55.488 "progress": { 00:24:55.488 "blocks": 45056, 00:24:55.488 "percent": 68 00:24:55.488 } 00:24:55.488 }, 00:24:55.489 "base_bdevs_list": [ 00:24:55.489 { 00:24:55.489 "name": "spare", 00:24:55.489 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:55.489 "is_configured": true, 
00:24:55.489 "data_offset": 0, 00:24:55.489 "data_size": 65536 00:24:55.489 }, 00:24:55.489 { 00:24:55.489 "name": null, 00:24:55.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.489 "is_configured": false, 00:24:55.489 "data_offset": 0, 00:24:55.489 "data_size": 65536 00:24:55.489 }, 00:24:55.489 { 00:24:55.489 "name": "BaseBdev3", 00:24:55.489 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:55.489 "is_configured": true, 00:24:55.489 "data_offset": 0, 00:24:55.489 "data_size": 65536 00:24:55.489 }, 00:24:55.489 { 00:24:55.489 "name": "BaseBdev4", 00:24:55.489 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:55.489 "is_configured": true, 00:24:55.489 "data_offset": 0, 00:24:55.489 "data_size": 65536 00:24:55.489 } 00:24:55.489 ] 00:24:55.489 }' 00:24:55.489 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:55.489 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:55.489 18:39:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:55.489 18:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.489 18:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:56.425 [2024-07-15 18:39:41.915880] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:56.425 [2024-07-15 18:39:41.915941] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:56.425 [2024-07-15 18:39:41.915986] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.684 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.943 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:56.943 "name": "raid_bdev1", 00:24:56.943 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:56.943 "strip_size_kb": 0, 00:24:56.943 "state": "online", 00:24:56.943 "raid_level": "raid1", 00:24:56.943 "superblock": false, 00:24:56.943 "num_base_bdevs": 4, 00:24:56.943 "num_base_bdevs_discovered": 3, 00:24:56.943 "num_base_bdevs_operational": 3, 00:24:56.944 "base_bdevs_list": [ 00:24:56.944 { 00:24:56.944 "name": "spare", 00:24:56.944 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:56.944 "is_configured": true, 00:24:56.944 "data_offset": 0, 00:24:56.944 "data_size": 65536 00:24:56.944 }, 00:24:56.944 { 00:24:56.944 "name": null, 00:24:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.944 "is_configured": false, 00:24:56.944 "data_offset": 0, 00:24:56.944 "data_size": 65536 00:24:56.944 }, 00:24:56.944 { 00:24:56.944 "name": "BaseBdev3", 00:24:56.944 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:56.944 "is_configured": true, 00:24:56.944 "data_offset": 0, 00:24:56.944 "data_size": 65536 00:24:56.944 }, 00:24:56.944 { 00:24:56.944 "name": "BaseBdev4", 00:24:56.944 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:56.944 "is_configured": true, 00:24:56.944 
"data_offset": 0, 00:24:56.944 "data_size": 65536 00:24:56.944 } 00:24:56.944 ] 00:24:56.944 }' 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.944 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.204 "name": "raid_bdev1", 00:24:57.204 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:57.204 "strip_size_kb": 0, 00:24:57.204 "state": "online", 00:24:57.204 "raid_level": "raid1", 00:24:57.204 "superblock": false, 00:24:57.204 "num_base_bdevs": 4, 00:24:57.204 "num_base_bdevs_discovered": 3, 00:24:57.204 "num_base_bdevs_operational": 3, 00:24:57.204 "base_bdevs_list": [ 00:24:57.204 { 00:24:57.204 
"name": "spare", 00:24:57.204 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:57.204 "is_configured": true, 00:24:57.204 "data_offset": 0, 00:24:57.204 "data_size": 65536 00:24:57.204 }, 00:24:57.204 { 00:24:57.204 "name": null, 00:24:57.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.204 "is_configured": false, 00:24:57.204 "data_offset": 0, 00:24:57.204 "data_size": 65536 00:24:57.204 }, 00:24:57.204 { 00:24:57.204 "name": "BaseBdev3", 00:24:57.204 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:57.204 "is_configured": true, 00:24:57.204 "data_offset": 0, 00:24:57.204 "data_size": 65536 00:24:57.204 }, 00:24:57.204 { 00:24:57.204 "name": "BaseBdev4", 00:24:57.204 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:57.204 "is_configured": true, 00:24:57.204 "data_offset": 0, 00:24:57.204 "data_size": 65536 00:24:57.204 } 00:24:57.204 ] 00:24:57.204 }' 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.204 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.463 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.463 "name": "raid_bdev1", 00:24:57.463 "uuid": "d864624d-5c19-4de1-9969-98e29b0a307c", 00:24:57.463 "strip_size_kb": 0, 00:24:57.463 "state": "online", 00:24:57.463 "raid_level": "raid1", 00:24:57.463 "superblock": false, 00:24:57.463 "num_base_bdevs": 4, 00:24:57.463 "num_base_bdevs_discovered": 3, 00:24:57.463 "num_base_bdevs_operational": 3, 00:24:57.463 "base_bdevs_list": [ 00:24:57.463 { 00:24:57.463 "name": "spare", 00:24:57.463 "uuid": "28ab752d-1288-5038-b370-a7cd633e2182", 00:24:57.463 "is_configured": true, 00:24:57.463 "data_offset": 0, 00:24:57.463 "data_size": 65536 00:24:57.463 }, 00:24:57.463 { 00:24:57.463 "name": null, 00:24:57.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.463 "is_configured": false, 00:24:57.463 "data_offset": 0, 00:24:57.463 "data_size": 65536 00:24:57.463 }, 00:24:57.463 { 00:24:57.463 "name": "BaseBdev3", 00:24:57.463 "uuid": "23b009e7-2a19-584d-b5c2-26e94dadcdb0", 00:24:57.463 "is_configured": true, 00:24:57.463 "data_offset": 0, 00:24:57.463 "data_size": 65536 00:24:57.463 }, 00:24:57.463 { 00:24:57.463 "name": "BaseBdev4", 00:24:57.463 "uuid": "a5525e38-30e4-525a-a09b-b34658b94497", 00:24:57.463 
"is_configured": true, 00:24:57.463 "data_offset": 0, 00:24:57.463 "data_size": 65536 00:24:57.463 } 00:24:57.463 ] 00:24:57.463 }' 00:24:57.463 18:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.463 18:39:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.030 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:58.287 [2024-07-15 18:39:43.708576] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:58.287 [2024-07-15 18:39:43.708603] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:58.287 [2024-07-15 18:39:43.708658] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:58.287 [2024-07-15 18:39:43.708732] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:58.287 [2024-07-15 18:39:43.708741] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xad6990 name raid_bdev1, state offline 00:24:58.287 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.287 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:58.545 18:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:58.545 /dev/nbd0 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:58.545 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:58.545 
18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:58.803 1+0 records in 00:24:58.803 1+0 records out 00:24:58.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223835 s, 18.3 MB/s 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:58.803 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:58.804 /dev/nbd1 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:59.063 18:39:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:59.063 1+0 records in 00:24:59.063 1+0 records out 00:24:59.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252751 s, 16.2 MB/s 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.063 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.321 18:39:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.580 
18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2908309 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2908309 ']' 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2908309 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2908309 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2908309' 00:24:59.580 killing process with pid 2908309 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2908309 00:24:59.580 Received shutdown signal, test time was about 60.000000 seconds 00:24:59.580 00:24:59.580 Latency(us) 00:24:59.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.580 =================================================================================================================== 00:24:59.580 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:24:59.580 [2024-07-15 18:39:45.072342] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.580 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2908309 00:24:59.580 [2024-07-15 18:39:45.117568] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:24:59.839 00:24:59.839 real 0m24.591s 00:24:59.839 user 0m34.329s 00:24:59.839 sys 0m4.307s 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.839 ************************************ 00:24:59.839 END TEST raid_rebuild_test 00:24:59.839 ************************************ 00:24:59.839 18:39:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:59.839 18:39:45 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:24:59.839 18:39:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:59.839 18:39:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.839 18:39:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:59.839 ************************************ 00:24:59.839 START TEST raid_rebuild_test_sb 00:24:59.839 ************************************ 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 
-- # local background_io=false 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 
00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2912287 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2912287 /var/tmp/spdk-raid.sock 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2912287 ']' 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:59.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.839 18:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:00.098 [2024-07-15 18:39:45.426901] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:25:00.098 [2024-07-15 18:39:45.426973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912287 ] 00:25:00.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:00.098 Zero copy mechanism will not be used. 00:25:00.098 [2024-07-15 18:39:45.528349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.098 [2024-07-15 18:39:45.618082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.356 [2024-07-15 18:39:45.688053] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:00.356 [2024-07-15 18:39:45.688108] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:00.923 18:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.923 18:39:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:00.923 18:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:00.923 18:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:01.181 BaseBdev1_malloc 00:25:01.181 18:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:01.181 
[2024-07-15 18:39:46.726217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:01.181 [2024-07-15 18:39:46.726261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.181 [2024-07-15 18:39:46.726281] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x277c130 00:25:01.181 [2024-07-15 18:39:46.726291] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.181 [2024-07-15 18:39:46.727985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.181 [2024-07-15 18:39:46.728013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:01.181 BaseBdev1 00:25:01.440 18:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:01.440 18:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:01.698 BaseBdev2_malloc 00:25:01.698 18:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:01.698 [2024-07-15 18:39:47.243983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:01.698 [2024-07-15 18:39:47.244022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.698 [2024-07-15 18:39:47.244038] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2921fa0 00:25:01.698 [2024-07-15 18:39:47.244047] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.698 [2024-07-15 18:39:47.245495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.698 [2024-07-15 18:39:47.245521] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:01.698 BaseBdev2 00:25:01.956 18:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:01.956 18:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:02.214 BaseBdev3_malloc 00:25:02.214 18:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:02.473 [2024-07-15 18:39:47.769850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:02.473 [2024-07-15 18:39:47.769891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.473 [2024-07-15 18:39:47.769908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x292d970 00:25:02.473 [2024-07-15 18:39:47.769918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.473 [2024-07-15 18:39:47.771420] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.473 [2024-07-15 18:39:47.771446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:02.473 BaseBdev3 00:25:02.473 18:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:02.473 18:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:02.731 BaseBdev4_malloc 00:25:02.731 18:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev4_malloc -p BaseBdev4 00:25:02.989 [2024-07-15 18:39:48.291674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:02.989 [2024-07-15 18:39:48.291716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.989 [2024-07-15 18:39:48.291735] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29248c0 00:25:02.989 [2024-07-15 18:39:48.291745] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.989 [2024-07-15 18:39:48.293190] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.989 [2024-07-15 18:39:48.293215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:02.989 BaseBdev4 00:25:02.989 18:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:03.248 spare_malloc 00:25:03.248 18:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:03.506 spare_delay 00:25:03.506 18:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:03.764 [2024-07-15 18:39:49.078048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:03.764 [2024-07-15 18:39:49.078088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.764 [2024-07-15 18:39:49.078104] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2774bf0 00:25:03.764 [2024-07-15 18:39:49.078114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:25:03.764 [2024-07-15 18:39:49.079591] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.764 [2024-07-15 18:39:49.079616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:03.764 spare 00:25:03.764 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:04.023 [2024-07-15 18:39:49.338776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:04.023 [2024-07-15 18:39:49.340013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:04.023 [2024-07-15 18:39:49.340068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:04.023 [2024-07-15 18:39:49.340113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:04.023 [2024-07-15 18:39:49.340301] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2776990 00:25:04.023 [2024-07-15 18:39:49.340311] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:04.023 [2024-07-15 18:39:49.340496] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x29213b0 00:25:04.023 [2024-07-15 18:39:49.340642] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2776990 00:25:04.023 [2024-07-15 18:39:49.340650] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2776990 00:25:04.023 [2024-07-15 18:39:49.340743] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.023 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.281 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.281 "name": "raid_bdev1", 00:25:04.281 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:04.281 "strip_size_kb": 0, 00:25:04.281 "state": "online", 00:25:04.281 "raid_level": "raid1", 00:25:04.281 "superblock": true, 00:25:04.281 "num_base_bdevs": 4, 00:25:04.281 "num_base_bdevs_discovered": 4, 00:25:04.281 "num_base_bdevs_operational": 4, 00:25:04.281 "base_bdevs_list": [ 00:25:04.281 { 00:25:04.281 "name": "BaseBdev1", 00:25:04.281 "uuid": "d6ff5dd7-c169-5e7f-9581-8395f4093279", 00:25:04.281 "is_configured": true, 00:25:04.281 "data_offset": 2048, 00:25:04.281 "data_size": 63488 00:25:04.281 }, 00:25:04.281 { 00:25:04.281 "name": "BaseBdev2", 00:25:04.281 
"uuid": "7d36bbab-1b89-533c-ad03-65e803111b05", 00:25:04.281 "is_configured": true, 00:25:04.281 "data_offset": 2048, 00:25:04.281 "data_size": 63488 00:25:04.281 }, 00:25:04.281 { 00:25:04.281 "name": "BaseBdev3", 00:25:04.281 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:04.281 "is_configured": true, 00:25:04.281 "data_offset": 2048, 00:25:04.281 "data_size": 63488 00:25:04.281 }, 00:25:04.281 { 00:25:04.281 "name": "BaseBdev4", 00:25:04.281 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:04.281 "is_configured": true, 00:25:04.281 "data_offset": 2048, 00:25:04.281 "data_size": 63488 00:25:04.281 } 00:25:04.281 ] 00:25:04.281 }' 00:25:04.281 18:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.281 18:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.849 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:04.849 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:05.107 [2024-07-15 18:39:50.498212] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.107 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:25:05.107 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.107 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true 
= true ']' 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:05.366 18:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:05.627 [2024-07-15 18:39:51.019304] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2921770 00:25:05.627 /dev/nbd0 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:05.627 1+0 records in 00:25:05.627 1+0 records out 00:25:05.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209969 s, 19.5 MB/s 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:05.627 18:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 
00:25:13.819 63488+0 records in 00:25:13.819 63488+0 records out 00:25:13.819 32505856 bytes (33 MB, 31 MiB) copied, 6.98161 s, 4.7 MB/s 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:13.819 [2024-07-15 18:39:58.328633] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:13.819 [2024-07-15 18:39:58.493124] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.819 "name": "raid_bdev1", 00:25:13.819 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:13.819 "strip_size_kb": 0, 00:25:13.819 "state": "online", 00:25:13.819 "raid_level": "raid1", 00:25:13.819 "superblock": true, 
00:25:13.819 "num_base_bdevs": 4, 00:25:13.819 "num_base_bdevs_discovered": 3, 00:25:13.819 "num_base_bdevs_operational": 3, 00:25:13.819 "base_bdevs_list": [ 00:25:13.819 { 00:25:13.819 "name": null, 00:25:13.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.819 "is_configured": false, 00:25:13.819 "data_offset": 2048, 00:25:13.819 "data_size": 63488 00:25:13.819 }, 00:25:13.819 { 00:25:13.819 "name": "BaseBdev2", 00:25:13.819 "uuid": "7d36bbab-1b89-533c-ad03-65e803111b05", 00:25:13.819 "is_configured": true, 00:25:13.819 "data_offset": 2048, 00:25:13.819 "data_size": 63488 00:25:13.819 }, 00:25:13.819 { 00:25:13.819 "name": "BaseBdev3", 00:25:13.819 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:13.819 "is_configured": true, 00:25:13.819 "data_offset": 2048, 00:25:13.819 "data_size": 63488 00:25:13.819 }, 00:25:13.819 { 00:25:13.819 "name": "BaseBdev4", 00:25:13.819 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:13.819 "is_configured": true, 00:25:13.819 "data_offset": 2048, 00:25:13.819 "data_size": 63488 00:25:13.819 } 00:25:13.819 ] 00:25:13.819 }' 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.819 18:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.078 18:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:14.336 [2024-07-15 18:39:59.652266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.336 [2024-07-15 18:39:59.656230] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2482b40 00:25:14.336 [2024-07-15 18:39:59.658316] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:14.336 18:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:15.272 18:40:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.272 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.272 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:15.272 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:15.272 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.272 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.272 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.531 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.531 "name": "raid_bdev1", 00:25:15.531 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:15.531 "strip_size_kb": 0, 00:25:15.531 "state": "online", 00:25:15.531 "raid_level": "raid1", 00:25:15.531 "superblock": true, 00:25:15.531 "num_base_bdevs": 4, 00:25:15.531 "num_base_bdevs_discovered": 4, 00:25:15.531 "num_base_bdevs_operational": 4, 00:25:15.531 "process": { 00:25:15.531 "type": "rebuild", 00:25:15.531 "target": "spare", 00:25:15.531 "progress": { 00:25:15.531 "blocks": 24576, 00:25:15.531 "percent": 38 00:25:15.531 } 00:25:15.531 }, 00:25:15.531 "base_bdevs_list": [ 00:25:15.531 { 00:25:15.531 "name": "spare", 00:25:15.531 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:15.531 "is_configured": true, 00:25:15.531 "data_offset": 2048, 00:25:15.531 "data_size": 63488 00:25:15.531 }, 00:25:15.531 { 00:25:15.531 "name": "BaseBdev2", 00:25:15.531 "uuid": "7d36bbab-1b89-533c-ad03-65e803111b05", 00:25:15.531 "is_configured": true, 00:25:15.531 "data_offset": 2048, 00:25:15.531 "data_size": 63488 
00:25:15.531 }, 00:25:15.531 { 00:25:15.531 "name": "BaseBdev3", 00:25:15.531 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:15.531 "is_configured": true, 00:25:15.531 "data_offset": 2048, 00:25:15.531 "data_size": 63488 00:25:15.531 }, 00:25:15.531 { 00:25:15.531 "name": "BaseBdev4", 00:25:15.531 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:15.531 "is_configured": true, 00:25:15.531 "data_offset": 2048, 00:25:15.531 "data_size": 63488 00:25:15.531 } 00:25:15.531 ] 00:25:15.531 }' 00:25:15.531 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:15.531 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.531 18:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:15.531 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.531 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:15.789 [2024-07-15 18:40:01.278049] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.048 [2024-07-15 18:40:01.371194] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:16.048 [2024-07-15 18:40:01.371240] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.048 [2024-07-15 18:40:01.371256] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.048 [2024-07-15 18:40:01.371263] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.048 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.307 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.307 "name": "raid_bdev1", 00:25:16.307 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:16.307 "strip_size_kb": 0, 00:25:16.307 "state": "online", 00:25:16.307 "raid_level": "raid1", 00:25:16.307 "superblock": true, 00:25:16.307 "num_base_bdevs": 4, 00:25:16.307 "num_base_bdevs_discovered": 3, 00:25:16.307 "num_base_bdevs_operational": 3, 00:25:16.307 "base_bdevs_list": [ 00:25:16.307 { 00:25:16.307 "name": null, 00:25:16.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.307 "is_configured": false, 00:25:16.307 "data_offset": 2048, 00:25:16.307 "data_size": 63488 00:25:16.307 }, 00:25:16.307 { 00:25:16.307 "name": "BaseBdev2", 
00:25:16.307 "uuid": "7d36bbab-1b89-533c-ad03-65e803111b05", 00:25:16.307 "is_configured": true, 00:25:16.307 "data_offset": 2048, 00:25:16.307 "data_size": 63488 00:25:16.307 }, 00:25:16.307 { 00:25:16.307 "name": "BaseBdev3", 00:25:16.307 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:16.307 "is_configured": true, 00:25:16.307 "data_offset": 2048, 00:25:16.307 "data_size": 63488 00:25:16.307 }, 00:25:16.307 { 00:25:16.307 "name": "BaseBdev4", 00:25:16.307 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:16.307 "is_configured": true, 00:25:16.307 "data_offset": 2048, 00:25:16.307 "data_size": 63488 00:25:16.307 } 00:25:16.307 ] 00:25:16.307 }' 00:25:16.307 18:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.307 18:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.874 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:16.874 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:16.874 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:16.875 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:16.875 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:16.875 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.875 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.133 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.133 "name": "raid_bdev1", 00:25:17.133 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:17.133 "strip_size_kb": 0, 00:25:17.133 "state": 
"online", 00:25:17.133 "raid_level": "raid1", 00:25:17.133 "superblock": true, 00:25:17.133 "num_base_bdevs": 4, 00:25:17.133 "num_base_bdevs_discovered": 3, 00:25:17.133 "num_base_bdevs_operational": 3, 00:25:17.133 "base_bdevs_list": [ 00:25:17.133 { 00:25:17.133 "name": null, 00:25:17.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.133 "is_configured": false, 00:25:17.133 "data_offset": 2048, 00:25:17.133 "data_size": 63488 00:25:17.133 }, 00:25:17.133 { 00:25:17.133 "name": "BaseBdev2", 00:25:17.133 "uuid": "7d36bbab-1b89-533c-ad03-65e803111b05", 00:25:17.133 "is_configured": true, 00:25:17.133 "data_offset": 2048, 00:25:17.133 "data_size": 63488 00:25:17.133 }, 00:25:17.133 { 00:25:17.133 "name": "BaseBdev3", 00:25:17.134 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:17.134 "is_configured": true, 00:25:17.134 "data_offset": 2048, 00:25:17.134 "data_size": 63488 00:25:17.134 }, 00:25:17.134 { 00:25:17.134 "name": "BaseBdev4", 00:25:17.134 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:17.134 "is_configured": true, 00:25:17.134 "data_offset": 2048, 00:25:17.134 "data_size": 63488 00:25:17.134 } 00:25:17.134 ] 00:25:17.134 }' 00:25:17.134 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:17.134 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:17.134 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:17.134 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:17.134 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:17.392 [2024-07-15 18:40:02.871222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:17.392 [2024-07-15 18:40:02.875255] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x277ba80 00:25:17.392 [2024-07-15 18:40:02.876798] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:17.392 18:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.768 18:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.768 "name": "raid_bdev1", 00:25:18.768 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:18.768 "strip_size_kb": 0, 00:25:18.768 "state": "online", 00:25:18.768 "raid_level": "raid1", 00:25:18.768 "superblock": true, 00:25:18.768 "num_base_bdevs": 4, 00:25:18.768 "num_base_bdevs_discovered": 4, 00:25:18.768 "num_base_bdevs_operational": 4, 00:25:18.768 "process": { 00:25:18.768 "type": "rebuild", 00:25:18.768 "target": "spare", 00:25:18.768 "progress": { 00:25:18.768 "blocks": 24576, 00:25:18.768 "percent": 38 00:25:18.768 } 00:25:18.768 }, 00:25:18.768 "base_bdevs_list": [ 00:25:18.768 { 00:25:18.768 "name": "spare", 00:25:18.768 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 
00:25:18.768 "is_configured": true, 00:25:18.768 "data_offset": 2048, 00:25:18.768 "data_size": 63488 00:25:18.768 }, 00:25:18.768 { 00:25:18.768 "name": "BaseBdev2", 00:25:18.768 "uuid": "7d36bbab-1b89-533c-ad03-65e803111b05", 00:25:18.768 "is_configured": true, 00:25:18.768 "data_offset": 2048, 00:25:18.768 "data_size": 63488 00:25:18.768 }, 00:25:18.768 { 00:25:18.768 "name": "BaseBdev3", 00:25:18.768 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:18.768 "is_configured": true, 00:25:18.768 "data_offset": 2048, 00:25:18.768 "data_size": 63488 00:25:18.768 }, 00:25:18.768 { 00:25:18.768 "name": "BaseBdev4", 00:25:18.768 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:18.768 "is_configured": true, 00:25:18.768 "data_offset": 2048, 00:25:18.768 "data_size": 63488 00:25:18.768 } 00:25:18.768 ] 00:25:18.768 }' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:25:18.768 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:25:18.768 18:40:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:19.027 [2024-07-15 18:40:04.488919] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:19.286 [2024-07-15 18:40:04.690026] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x277ba80 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.286 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.545 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.545 "name": "raid_bdev1", 00:25:19.545 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:19.545 "strip_size_kb": 0, 00:25:19.545 "state": "online", 00:25:19.545 "raid_level": "raid1", 00:25:19.545 "superblock": true, 00:25:19.545 "num_base_bdevs": 4, 00:25:19.545 "num_base_bdevs_discovered": 3, 00:25:19.545 "num_base_bdevs_operational": 3, 00:25:19.545 "process": { 00:25:19.545 "type": 
"rebuild", 00:25:19.545 "target": "spare", 00:25:19.545 "progress": { 00:25:19.545 "blocks": 38912, 00:25:19.545 "percent": 61 00:25:19.545 } 00:25:19.545 }, 00:25:19.545 "base_bdevs_list": [ 00:25:19.545 { 00:25:19.545 "name": "spare", 00:25:19.545 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:19.545 "is_configured": true, 00:25:19.545 "data_offset": 2048, 00:25:19.545 "data_size": 63488 00:25:19.545 }, 00:25:19.545 { 00:25:19.545 "name": null, 00:25:19.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.545 "is_configured": false, 00:25:19.545 "data_offset": 2048, 00:25:19.545 "data_size": 63488 00:25:19.545 }, 00:25:19.545 { 00:25:19.545 "name": "BaseBdev3", 00:25:19.545 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:19.545 "is_configured": true, 00:25:19.545 "data_offset": 2048, 00:25:19.545 "data_size": 63488 00:25:19.545 }, 00:25:19.545 { 00:25:19.545 "name": "BaseBdev4", 00:25:19.545 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:19.545 "is_configured": true, 00:25:19.545 "data_offset": 2048, 00:25:19.545 "data_size": 63488 00:25:19.545 } 00:25:19.545 ] 00:25:19.545 }' 00:25:19.545 18:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=964 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_name=raid_bdev1 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.545 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.804 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.804 "name": "raid_bdev1", 00:25:19.804 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:19.804 "strip_size_kb": 0, 00:25:19.804 "state": "online", 00:25:19.804 "raid_level": "raid1", 00:25:19.804 "superblock": true, 00:25:19.804 "num_base_bdevs": 4, 00:25:19.804 "num_base_bdevs_discovered": 3, 00:25:19.804 "num_base_bdevs_operational": 3, 00:25:19.804 "process": { 00:25:19.804 "type": "rebuild", 00:25:19.804 "target": "spare", 00:25:19.804 "progress": { 00:25:19.804 "blocks": 47104, 00:25:19.804 "percent": 74 00:25:19.804 } 00:25:19.804 }, 00:25:19.804 "base_bdevs_list": [ 00:25:19.804 { 00:25:19.804 "name": "spare", 00:25:19.804 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:19.804 "is_configured": true, 00:25:19.804 "data_offset": 2048, 00:25:19.804 "data_size": 63488 00:25:19.804 }, 00:25:19.804 { 00:25:19.804 "name": null, 00:25:19.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.804 "is_configured": false, 00:25:19.804 "data_offset": 2048, 00:25:19.804 "data_size": 63488 00:25:19.804 }, 00:25:19.804 { 00:25:19.804 "name": "BaseBdev3", 00:25:19.804 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:19.804 "is_configured": true, 00:25:19.804 "data_offset": 2048, 
00:25:19.804 "data_size": 63488 00:25:19.804 }, 00:25:19.804 { 00:25:19.804 "name": "BaseBdev4", 00:25:19.804 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:19.804 "is_configured": true, 00:25:19.804 "data_offset": 2048, 00:25:19.804 "data_size": 63488 00:25:19.804 } 00:25:19.804 ] 00:25:19.804 }' 00:25:19.804 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:20.063 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.063 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:20.063 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.063 18:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:20.630 [2024-07-15 18:40:06.100573] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:20.630 [2024-07-15 18:40:06.100630] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:20.630 [2024-07-15 18:40:06.100728] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.889 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.148 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:21.148 "name": "raid_bdev1", 00:25:21.148 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:21.148 "strip_size_kb": 0, 00:25:21.148 "state": "online", 00:25:21.148 "raid_level": "raid1", 00:25:21.148 "superblock": true, 00:25:21.148 "num_base_bdevs": 4, 00:25:21.148 "num_base_bdevs_discovered": 3, 00:25:21.148 "num_base_bdevs_operational": 3, 00:25:21.148 "base_bdevs_list": [ 00:25:21.148 { 00:25:21.148 "name": "spare", 00:25:21.148 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:21.148 "is_configured": true, 00:25:21.148 "data_offset": 2048, 00:25:21.148 "data_size": 63488 00:25:21.148 }, 00:25:21.148 { 00:25:21.148 "name": null, 00:25:21.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.148 "is_configured": false, 00:25:21.148 "data_offset": 2048, 00:25:21.148 "data_size": 63488 00:25:21.148 }, 00:25:21.148 { 00:25:21.148 "name": "BaseBdev3", 00:25:21.148 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:21.148 "is_configured": true, 00:25:21.148 "data_offset": 2048, 00:25:21.148 "data_size": 63488 00:25:21.148 }, 00:25:21.148 { 00:25:21.148 "name": "BaseBdev4", 00:25:21.148 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:21.148 "is_configured": true, 00:25:21.148 "data_offset": 2048, 00:25:21.148 "data_size": 63488 00:25:21.148 } 00:25:21.148 ] 00:25:21.148 }' 00:25:21.148 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.406 18:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:21.665 "name": "raid_bdev1", 00:25:21.665 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:21.665 "strip_size_kb": 0, 00:25:21.665 "state": "online", 00:25:21.665 "raid_level": "raid1", 00:25:21.665 "superblock": true, 00:25:21.665 "num_base_bdevs": 4, 00:25:21.665 "num_base_bdevs_discovered": 3, 00:25:21.665 "num_base_bdevs_operational": 3, 00:25:21.665 "base_bdevs_list": [ 00:25:21.665 { 00:25:21.665 "name": "spare", 00:25:21.665 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:21.665 "is_configured": true, 00:25:21.665 "data_offset": 2048, 00:25:21.665 "data_size": 63488 00:25:21.665 }, 00:25:21.665 { 00:25:21.665 "name": null, 00:25:21.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.665 "is_configured": false, 00:25:21.665 "data_offset": 2048, 00:25:21.665 "data_size": 
63488 00:25:21.665 }, 00:25:21.665 { 00:25:21.665 "name": "BaseBdev3", 00:25:21.665 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:21.665 "is_configured": true, 00:25:21.665 "data_offset": 2048, 00:25:21.665 "data_size": 63488 00:25:21.665 }, 00:25:21.665 { 00:25:21.665 "name": "BaseBdev4", 00:25:21.665 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:21.665 "is_configured": true, 00:25:21.665 "data_offset": 2048, 00:25:21.665 "data_size": 63488 00:25:21.665 } 00:25:21.665 ] 00:25:21.665 }' 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.665 18:40:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.665 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.924 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.924 "name": "raid_bdev1", 00:25:21.924 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:21.924 "strip_size_kb": 0, 00:25:21.924 "state": "online", 00:25:21.924 "raid_level": "raid1", 00:25:21.924 "superblock": true, 00:25:21.924 "num_base_bdevs": 4, 00:25:21.924 "num_base_bdevs_discovered": 3, 00:25:21.924 "num_base_bdevs_operational": 3, 00:25:21.924 "base_bdevs_list": [ 00:25:21.924 { 00:25:21.924 "name": "spare", 00:25:21.924 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:21.924 "is_configured": true, 00:25:21.924 "data_offset": 2048, 00:25:21.924 "data_size": 63488 00:25:21.924 }, 00:25:21.924 { 00:25:21.924 "name": null, 00:25:21.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.924 "is_configured": false, 00:25:21.924 "data_offset": 2048, 00:25:21.924 "data_size": 63488 00:25:21.924 }, 00:25:21.924 { 00:25:21.924 "name": "BaseBdev3", 00:25:21.924 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:21.924 "is_configured": true, 00:25:21.924 "data_offset": 2048, 00:25:21.924 "data_size": 63488 00:25:21.924 }, 00:25:21.924 { 00:25:21.924 "name": "BaseBdev4", 00:25:21.924 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:21.924 "is_configured": true, 00:25:21.924 "data_offset": 2048, 00:25:21.924 "data_size": 63488 00:25:21.924 } 00:25:21.924 ] 00:25:21.924 }' 00:25:21.924 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.924 18:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:25:22.491 18:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:22.750 [2024-07-15 18:40:08.205891] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:22.750 [2024-07-15 18:40:08.205919] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.750 [2024-07-15 18:40:08.205980] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.750 [2024-07-15 18:40:08.206055] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:22.750 [2024-07-15 18:40:08.206064] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2776990 name raid_bdev1, state offline 00:25:22.750 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.750 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:23.008 18:40:08 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:23.008 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:23.009 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:23.009 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:23.009 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:23.267 /dev/nbd0 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:23.267 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:23.268 1+0 records in 00:25:23.268 
1+0 records out 00:25:23.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227763 s, 18.0 MB/s 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:23.268 18:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:23.526 /dev/nbd1 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 
00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:23.526 1+0 records in 00:25:23.526 1+0 records out 00:25:23.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271645 s, 15.1 MB/s 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:23.526 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:23.784 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:23.784 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.784 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:23.784 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:23.784 18:40:09 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:23.784 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:23.784 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:24.042 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:24.300 
18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:25:24.300 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:24.558 18:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:24.817 [2024-07-15 18:40:10.184618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.818 [2024-07-15 18:40:10.184671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.818 [2024-07-15 18:40:10.184691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2774e20 00:25:24.818 [2024-07-15 18:40:10.184700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.818 [2024-07-15 18:40:10.186424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.818 [2024-07-15 18:40:10.186453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:24.818 [2024-07-15 18:40:10.186533] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:24.818 [2024-07-15 18:40:10.186561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:24.818 [2024-07-15 18:40:10.186672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:24.818 [2024-07-15 18:40:10.186747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev4 is claimed 00:25:24.818 spare 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.818 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.818 [2024-07-15 18:40:10.287068] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x277b7a0 00:25:24.818 [2024-07-15 18:40:10.287084] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:24.818 [2024-07-15 18:40:10.287290] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2772ee0 00:25:24.818 [2024-07-15 18:40:10.287447] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x277b7a0 00:25:24.818 [2024-07-15 
18:40:10.287463] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x277b7a0 00:25:24.818 [2024-07-15 18:40:10.287572] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.078 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.078 "name": "raid_bdev1", 00:25:25.078 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:25.078 "strip_size_kb": 0, 00:25:25.078 "state": "online", 00:25:25.078 "raid_level": "raid1", 00:25:25.078 "superblock": true, 00:25:25.078 "num_base_bdevs": 4, 00:25:25.078 "num_base_bdevs_discovered": 3, 00:25:25.078 "num_base_bdevs_operational": 3, 00:25:25.078 "base_bdevs_list": [ 00:25:25.078 { 00:25:25.078 "name": "spare", 00:25:25.078 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:25.078 "is_configured": true, 00:25:25.078 "data_offset": 2048, 00:25:25.078 "data_size": 63488 00:25:25.078 }, 00:25:25.078 { 00:25:25.078 "name": null, 00:25:25.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.078 "is_configured": false, 00:25:25.078 "data_offset": 2048, 00:25:25.078 "data_size": 63488 00:25:25.078 }, 00:25:25.078 { 00:25:25.078 "name": "BaseBdev3", 00:25:25.078 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:25.078 "is_configured": true, 00:25:25.078 "data_offset": 2048, 00:25:25.078 "data_size": 63488 00:25:25.078 }, 00:25:25.078 { 00:25:25.078 "name": "BaseBdev4", 00:25:25.078 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:25.078 "is_configured": true, 00:25:25.078 "data_offset": 2048, 00:25:25.078 "data_size": 63488 00:25:25.078 } 00:25:25.078 ] 00:25:25.078 }' 00:25:25.078 18:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.078 18:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 
00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.676 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:25.935 "name": "raid_bdev1", 00:25:25.935 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:25.935 "strip_size_kb": 0, 00:25:25.935 "state": "online", 00:25:25.935 "raid_level": "raid1", 00:25:25.935 "superblock": true, 00:25:25.935 "num_base_bdevs": 4, 00:25:25.935 "num_base_bdevs_discovered": 3, 00:25:25.935 "num_base_bdevs_operational": 3, 00:25:25.935 "base_bdevs_list": [ 00:25:25.935 { 00:25:25.935 "name": "spare", 00:25:25.935 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:25.935 "is_configured": true, 00:25:25.935 "data_offset": 2048, 00:25:25.935 "data_size": 63488 00:25:25.935 }, 00:25:25.935 { 00:25:25.935 "name": null, 00:25:25.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.935 "is_configured": false, 00:25:25.935 "data_offset": 2048, 00:25:25.935 "data_size": 63488 00:25:25.935 }, 00:25:25.935 { 00:25:25.935 "name": "BaseBdev3", 00:25:25.935 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:25.935 "is_configured": true, 00:25:25.935 "data_offset": 2048, 00:25:25.935 "data_size": 63488 00:25:25.935 }, 00:25:25.935 { 00:25:25.935 "name": "BaseBdev4", 00:25:25.935 "uuid": 
"10b28d88-9505-5424-a20c-3893fa68850c", 00:25:25.935 "is_configured": true, 00:25:25.935 "data_offset": 2048, 00:25:25.935 "data_size": 63488 00:25:25.935 } 00:25:25.935 ] 00:25:25.935 }' 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.935 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:26.193 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.193 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:26.452 [2024-07-15 18:40:11.861260] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.452 18:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.711 18:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.711 "name": "raid_bdev1", 00:25:26.711 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:26.711 "strip_size_kb": 0, 00:25:26.711 "state": "online", 00:25:26.711 "raid_level": "raid1", 00:25:26.711 "superblock": true, 00:25:26.711 "num_base_bdevs": 4, 00:25:26.711 "num_base_bdevs_discovered": 2, 00:25:26.711 "num_base_bdevs_operational": 2, 00:25:26.711 "base_bdevs_list": [ 00:25:26.711 { 00:25:26.711 "name": null, 00:25:26.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.711 "is_configured": false, 00:25:26.711 "data_offset": 2048, 00:25:26.711 "data_size": 63488 00:25:26.711 }, 00:25:26.711 { 00:25:26.711 "name": null, 00:25:26.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.711 "is_configured": false, 00:25:26.711 "data_offset": 2048, 00:25:26.711 "data_size": 63488 00:25:26.711 }, 00:25:26.711 { 00:25:26.711 "name": "BaseBdev3", 00:25:26.711 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:26.711 "is_configured": true, 00:25:26.711 "data_offset": 2048, 00:25:26.711 "data_size": 63488 00:25:26.711 }, 00:25:26.711 { 00:25:26.711 "name": 
"BaseBdev4", 00:25:26.711 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:26.711 "is_configured": true, 00:25:26.711 "data_offset": 2048, 00:25:26.711 "data_size": 63488 00:25:26.711 } 00:25:26.711 ] 00:25:26.711 }' 00:25:26.711 18:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.711 18:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.279 18:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:27.538 [2024-07-15 18:40:12.976273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:27.538 [2024-07-15 18:40:12.976419] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:27.538 [2024-07-15 18:40:12.976433] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:27.538 [2024-07-15 18:40:12.976457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:27.538 [2024-07-15 18:40:12.980322] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x29217f0 00:25:27.538 [2024-07-15 18:40:12.981746] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:27.538 18:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.473 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.732 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:28.732 "name": "raid_bdev1", 00:25:28.732 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:28.732 "strip_size_kb": 0, 00:25:28.732 "state": "online", 00:25:28.732 "raid_level": "raid1", 00:25:28.732 "superblock": true, 00:25:28.732 "num_base_bdevs": 4, 00:25:28.732 "num_base_bdevs_discovered": 3, 00:25:28.732 "num_base_bdevs_operational": 3, 00:25:28.732 "process": { 00:25:28.732 "type": "rebuild", 00:25:28.732 "target": "spare", 00:25:28.732 "progress": { 00:25:28.732 "blocks": 22528, 00:25:28.732 "percent": 35 
00:25:28.732 } 00:25:28.732 }, 00:25:28.732 "base_bdevs_list": [ 00:25:28.732 { 00:25:28.732 "name": "spare", 00:25:28.732 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:28.732 "is_configured": true, 00:25:28.732 "data_offset": 2048, 00:25:28.732 "data_size": 63488 00:25:28.732 }, 00:25:28.732 { 00:25:28.732 "name": null, 00:25:28.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.732 "is_configured": false, 00:25:28.732 "data_offset": 2048, 00:25:28.732 "data_size": 63488 00:25:28.732 }, 00:25:28.732 { 00:25:28.732 "name": "BaseBdev3", 00:25:28.732 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:28.732 "is_configured": true, 00:25:28.732 "data_offset": 2048, 00:25:28.732 "data_size": 63488 00:25:28.732 }, 00:25:28.732 { 00:25:28.732 "name": "BaseBdev4", 00:25:28.732 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:28.732 "is_configured": true, 00:25:28.732 "data_offset": 2048, 00:25:28.732 "data_size": 63488 00:25:28.732 } 00:25:28.732 ] 00:25:28.732 }' 00:25:28.732 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:28.732 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.732 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:28.991 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.991 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:28.991 [2024-07-15 18:40:14.521663] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.250 [2024-07-15 18:40:14.593824] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:29.250 [2024-07-15 18:40:14.593864] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:25:29.250 [2024-07-15 18:40:14.593879] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.250 [2024-07-15 18:40:14.593885] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.250 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.509 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.509 "name": "raid_bdev1", 00:25:29.509 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:29.509 "strip_size_kb": 0, 00:25:29.509 "state": "online", 00:25:29.509 
"raid_level": "raid1", 00:25:29.509 "superblock": true, 00:25:29.509 "num_base_bdevs": 4, 00:25:29.509 "num_base_bdevs_discovered": 2, 00:25:29.509 "num_base_bdevs_operational": 2, 00:25:29.509 "base_bdevs_list": [ 00:25:29.509 { 00:25:29.509 "name": null, 00:25:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.509 "is_configured": false, 00:25:29.509 "data_offset": 2048, 00:25:29.509 "data_size": 63488 00:25:29.509 }, 00:25:29.509 { 00:25:29.509 "name": null, 00:25:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.509 "is_configured": false, 00:25:29.509 "data_offset": 2048, 00:25:29.509 "data_size": 63488 00:25:29.509 }, 00:25:29.509 { 00:25:29.509 "name": "BaseBdev3", 00:25:29.510 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:29.510 "is_configured": true, 00:25:29.510 "data_offset": 2048, 00:25:29.510 "data_size": 63488 00:25:29.510 }, 00:25:29.510 { 00:25:29.510 "name": "BaseBdev4", 00:25:29.510 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:29.510 "is_configured": true, 00:25:29.510 "data_offset": 2048, 00:25:29.510 "data_size": 63488 00:25:29.510 } 00:25:29.510 ] 00:25:29.510 }' 00:25:29.510 18:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.510 18:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.077 18:40:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:30.336 [2024-07-15 18:40:15.732920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:30.336 [2024-07-15 18:40:15.732976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.336 [2024-07-15 18:40:15.732997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2812710 00:25:30.336 [2024-07-15 18:40:15.733007] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.336 [2024-07-15 18:40:15.733395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.336 [2024-07-15 18:40:15.733411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:30.336 [2024-07-15 18:40:15.733490] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:30.336 [2024-07-15 18:40:15.733500] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:30.336 [2024-07-15 18:40:15.733508] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:30.336 [2024-07-15 18:40:15.733524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.336 [2024-07-15 18:40:15.737393] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2812ff0 00:25:30.336 spare 00:25:30.336 [2024-07-15 18:40:15.738832] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:30.336 18:40:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:31.275 18:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.533 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:31.533 "name": "raid_bdev1", 00:25:31.534 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:31.534 "strip_size_kb": 0, 00:25:31.534 "state": "online", 00:25:31.534 "raid_level": "raid1", 00:25:31.534 "superblock": true, 00:25:31.534 "num_base_bdevs": 4, 00:25:31.534 "num_base_bdevs_discovered": 3, 00:25:31.534 "num_base_bdevs_operational": 3, 00:25:31.534 "process": { 00:25:31.534 "type": "rebuild", 00:25:31.534 "target": "spare", 00:25:31.534 "progress": { 00:25:31.534 "blocks": 24576, 00:25:31.534 "percent": 38 00:25:31.534 } 00:25:31.534 }, 00:25:31.534 "base_bdevs_list": [ 00:25:31.534 { 00:25:31.534 "name": "spare", 00:25:31.534 "uuid": "df369f28-51c5-5f4b-8d7a-259f36781801", 00:25:31.534 "is_configured": true, 00:25:31.534 "data_offset": 2048, 00:25:31.534 "data_size": 63488 00:25:31.534 }, 00:25:31.534 { 00:25:31.534 "name": null, 00:25:31.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.534 "is_configured": false, 00:25:31.534 "data_offset": 2048, 00:25:31.534 "data_size": 63488 00:25:31.534 }, 00:25:31.534 { 00:25:31.534 "name": "BaseBdev3", 00:25:31.534 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:31.534 "is_configured": true, 00:25:31.534 "data_offset": 2048, 00:25:31.534 "data_size": 63488 00:25:31.534 }, 00:25:31.534 { 00:25:31.534 "name": "BaseBdev4", 00:25:31.534 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:31.534 "is_configured": true, 00:25:31.534 "data_offset": 2048, 00:25:31.534 "data_size": 63488 00:25:31.534 } 00:25:31.534 ] 00:25:31.534 }' 00:25:31.534 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:31.534 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.534 
18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:31.792 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.792 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:32.051 [2024-07-15 18:40:17.347015] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:32.051 [2024-07-15 18:40:17.350855] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:32.051 [2024-07-15 18:40:17.350893] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.051 [2024-07-15 18:40:17.350908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:32.051 [2024-07-15 18:40:17.350914] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.051 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.311 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.311 "name": "raid_bdev1", 00:25:32.311 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:32.311 "strip_size_kb": 0, 00:25:32.311 "state": "online", 00:25:32.311 "raid_level": "raid1", 00:25:32.311 "superblock": true, 00:25:32.311 "num_base_bdevs": 4, 00:25:32.311 "num_base_bdevs_discovered": 2, 00:25:32.311 "num_base_bdevs_operational": 2, 00:25:32.311 "base_bdevs_list": [ 00:25:32.311 { 00:25:32.311 "name": null, 00:25:32.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.311 "is_configured": false, 00:25:32.311 "data_offset": 2048, 00:25:32.311 "data_size": 63488 00:25:32.311 }, 00:25:32.311 { 00:25:32.311 "name": null, 00:25:32.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.311 "is_configured": false, 00:25:32.311 "data_offset": 2048, 00:25:32.311 "data_size": 63488 00:25:32.311 }, 00:25:32.311 { 00:25:32.311 "name": "BaseBdev3", 00:25:32.311 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:32.311 "is_configured": true, 00:25:32.311 "data_offset": 2048, 00:25:32.311 "data_size": 63488 00:25:32.311 }, 00:25:32.311 { 00:25:32.311 "name": "BaseBdev4", 00:25:32.311 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:32.311 "is_configured": true, 00:25:32.311 "data_offset": 2048, 00:25:32.311 "data_size": 63488 00:25:32.311 } 00:25:32.311 ] 00:25:32.311 }' 00:25:32.311 18:40:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:25:32.311 18:40:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.879 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.138 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.138 "name": "raid_bdev1", 00:25:33.138 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:33.138 "strip_size_kb": 0, 00:25:33.138 "state": "online", 00:25:33.138 "raid_level": "raid1", 00:25:33.138 "superblock": true, 00:25:33.138 "num_base_bdevs": 4, 00:25:33.138 "num_base_bdevs_discovered": 2, 00:25:33.138 "num_base_bdevs_operational": 2, 00:25:33.138 "base_bdevs_list": [ 00:25:33.138 { 00:25:33.138 "name": null, 00:25:33.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.138 "is_configured": false, 00:25:33.138 "data_offset": 2048, 00:25:33.138 "data_size": 63488 00:25:33.138 }, 00:25:33.138 { 00:25:33.138 "name": null, 00:25:33.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.138 "is_configured": false, 00:25:33.138 "data_offset": 2048, 00:25:33.138 "data_size": 63488 00:25:33.138 }, 00:25:33.138 { 00:25:33.138 "name": "BaseBdev3", 00:25:33.138 "uuid": 
"2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:33.138 "is_configured": true, 00:25:33.138 "data_offset": 2048, 00:25:33.138 "data_size": 63488 00:25:33.138 }, 00:25:33.138 { 00:25:33.138 "name": "BaseBdev4", 00:25:33.138 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:33.138 "is_configured": true, 00:25:33.138 "data_offset": 2048, 00:25:33.138 "data_size": 63488 00:25:33.138 } 00:25:33.138 ] 00:25:33.138 }' 00:25:33.138 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:33.138 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:33.138 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:33.138 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:33.138 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:33.397 18:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:33.962 [2024-07-15 18:40:19.312202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:33.962 [2024-07-15 18:40:19.312250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.962 [2024-07-15 18:40:19.312267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x277c360 00:25:33.962 [2024-07-15 18:40:19.312277] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.962 [2024-07-15 18:40:19.312631] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.962 [2024-07-15 18:40:19.312646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:25:33.962 [2024-07-15 18:40:19.312709] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:33.963 [2024-07-15 18:40:19.312719] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:33.963 [2024-07-15 18:40:19.312733] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:33.963 BaseBdev1 00:25:33.963 18:40:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.894 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:35.169 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.169 "name": "raid_bdev1", 00:25:35.169 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:35.169 "strip_size_kb": 0, 00:25:35.169 "state": "online", 00:25:35.169 "raid_level": "raid1", 00:25:35.169 "superblock": true, 00:25:35.169 "num_base_bdevs": 4, 00:25:35.169 "num_base_bdevs_discovered": 2, 00:25:35.169 "num_base_bdevs_operational": 2, 00:25:35.169 "base_bdevs_list": [ 00:25:35.169 { 00:25:35.169 "name": null, 00:25:35.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.169 "is_configured": false, 00:25:35.169 "data_offset": 2048, 00:25:35.169 "data_size": 63488 00:25:35.169 }, 00:25:35.169 { 00:25:35.169 "name": null, 00:25:35.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.169 "is_configured": false, 00:25:35.169 "data_offset": 2048, 00:25:35.169 "data_size": 63488 00:25:35.169 }, 00:25:35.169 { 00:25:35.169 "name": "BaseBdev3", 00:25:35.169 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:35.169 "is_configured": true, 00:25:35.169 "data_offset": 2048, 00:25:35.169 "data_size": 63488 00:25:35.169 }, 00:25:35.169 { 00:25:35.169 "name": "BaseBdev4", 00:25:35.169 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:35.169 "is_configured": true, 00:25:35.169 "data_offset": 2048, 00:25:35.169 "data_size": 63488 00:25:35.169 } 00:25:35.169 ] 00:25:35.169 }' 00:25:35.169 18:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.169 18:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 
00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.735 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.992 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.992 "name": "raid_bdev1", 00:25:35.992 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:35.992 "strip_size_kb": 0, 00:25:35.992 "state": "online", 00:25:35.992 "raid_level": "raid1", 00:25:35.992 "superblock": true, 00:25:35.992 "num_base_bdevs": 4, 00:25:35.992 "num_base_bdevs_discovered": 2, 00:25:35.992 "num_base_bdevs_operational": 2, 00:25:35.992 "base_bdevs_list": [ 00:25:35.992 { 00:25:35.992 "name": null, 00:25:35.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.992 "is_configured": false, 00:25:35.992 "data_offset": 2048, 00:25:35.992 "data_size": 63488 00:25:35.992 }, 00:25:35.992 { 00:25:35.993 "name": null, 00:25:35.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.993 "is_configured": false, 00:25:35.993 "data_offset": 2048, 00:25:35.993 "data_size": 63488 00:25:35.993 }, 00:25:35.993 { 00:25:35.993 "name": "BaseBdev3", 00:25:35.993 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:35.993 "is_configured": true, 00:25:35.993 "data_offset": 2048, 00:25:35.993 "data_size": 63488 00:25:35.993 }, 00:25:35.993 { 00:25:35.993 "name": "BaseBdev4", 00:25:35.993 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:35.993 "is_configured": true, 00:25:35.993 "data_offset": 2048, 00:25:35.993 "data_size": 63488 00:25:35.993 } 00:25:35.993 ] 00:25:35.993 }' 00:25:35.993 18:40:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:35.993 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:35.993 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:36.250 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:36.509 [2024-07-15 18:40:21.831011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.509 [2024-07-15 18:40:21.831136] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:36.509 [2024-07-15 18:40:21.831150] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:36.509 request: 00:25:36.509 { 00:25:36.509 "base_bdev": "BaseBdev1", 00:25:36.509 "raid_bdev": "raid_bdev1", 00:25:36.509 "method": "bdev_raid_add_base_bdev", 00:25:36.509 "req_id": 1 00:25:36.509 } 00:25:36.509 Got JSON-RPC error response 00:25:36.509 response: 00:25:36.509 { 00:25:36.509 "code": -22, 00:25:36.509 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:36.509 } 00:25:36.509 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:25:36.509 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:36.509 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:36.509 18:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:36.509 18:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.444 18:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.702 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.702 "name": "raid_bdev1", 00:25:37.702 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:37.702 "strip_size_kb": 0, 00:25:37.702 "state": "online", 00:25:37.702 "raid_level": "raid1", 00:25:37.702 "superblock": true, 00:25:37.702 "num_base_bdevs": 4, 00:25:37.702 "num_base_bdevs_discovered": 2, 00:25:37.702 "num_base_bdevs_operational": 2, 00:25:37.702 "base_bdevs_list": [ 00:25:37.702 { 00:25:37.702 "name": null, 00:25:37.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.702 "is_configured": false, 00:25:37.702 "data_offset": 2048, 00:25:37.702 "data_size": 63488 00:25:37.702 }, 00:25:37.702 { 00:25:37.702 "name": null, 00:25:37.702 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:37.702 "is_configured": false, 00:25:37.702 "data_offset": 2048, 00:25:37.702 "data_size": 63488 00:25:37.702 }, 00:25:37.702 { 00:25:37.702 "name": "BaseBdev3", 00:25:37.702 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:37.702 "is_configured": true, 00:25:37.702 "data_offset": 2048, 00:25:37.702 "data_size": 63488 00:25:37.702 }, 00:25:37.702 { 00:25:37.702 "name": "BaseBdev4", 00:25:37.702 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:37.702 "is_configured": true, 00:25:37.702 "data_offset": 2048, 00:25:37.702 "data_size": 63488 00:25:37.702 } 00:25:37.702 ] 00:25:37.702 }' 00:25:37.702 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.702 18:40:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.265 18:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.523 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.523 "name": "raid_bdev1", 00:25:38.523 "uuid": "b6f56bfe-a3c3-411f-af85-fb68bf1b518d", 00:25:38.523 "strip_size_kb": 0, 00:25:38.523 "state": "online", 00:25:38.523 "raid_level": "raid1", 00:25:38.523 
"superblock": true, 00:25:38.523 "num_base_bdevs": 4, 00:25:38.523 "num_base_bdevs_discovered": 2, 00:25:38.523 "num_base_bdevs_operational": 2, 00:25:38.523 "base_bdevs_list": [ 00:25:38.523 { 00:25:38.523 "name": null, 00:25:38.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.523 "is_configured": false, 00:25:38.523 "data_offset": 2048, 00:25:38.523 "data_size": 63488 00:25:38.523 }, 00:25:38.523 { 00:25:38.523 "name": null, 00:25:38.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.523 "is_configured": false, 00:25:38.523 "data_offset": 2048, 00:25:38.523 "data_size": 63488 00:25:38.523 }, 00:25:38.523 { 00:25:38.523 "name": "BaseBdev3", 00:25:38.523 "uuid": "2f6c355d-c814-5c78-9be6-8565703e361e", 00:25:38.523 "is_configured": true, 00:25:38.523 "data_offset": 2048, 00:25:38.523 "data_size": 63488 00:25:38.523 }, 00:25:38.523 { 00:25:38.523 "name": "BaseBdev4", 00:25:38.523 "uuid": "10b28d88-9505-5424-a20c-3893fa68850c", 00:25:38.523 "is_configured": true, 00:25:38.523 "data_offset": 2048, 00:25:38.523 "data_size": 63488 00:25:38.523 } 00:25:38.523 ] 00:25:38.523 }' 00:25:38.523 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.523 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:38.523 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2912287 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2912287 ']' 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2912287 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:38.781 18:40:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2912287 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2912287' 00:25:38.781 killing process with pid 2912287 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2912287 00:25:38.781 Received shutdown signal, test time was about 60.000000 seconds 00:25:38.781 00:25:38.781 Latency(us) 00:25:38.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.781 =================================================================================================================== 00:25:38.781 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:38.781 [2024-07-15 18:40:24.140556] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:38.781 [2024-07-15 18:40:24.140651] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.781 [2024-07-15 18:40:24.140706] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.781 [2024-07-15 18:40:24.140716] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x277b7a0 name raid_bdev1, state offline 00:25:38.781 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2912287 00:25:38.781 [2024-07-15 18:40:24.186253] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:25:39.040 00:25:39.040 real 0m39.024s 
00:25:39.040 user 0m58.642s 00:25:39.040 sys 0m5.430s 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.040 ************************************ 00:25:39.040 END TEST raid_rebuild_test_sb 00:25:39.040 ************************************ 00:25:39.040 18:40:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:39.040 18:40:24 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:25:39.040 18:40:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:39.040 18:40:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.040 18:40:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.040 ************************************ 00:25:39.040 START TEST raid_rebuild_test_io 00:25:39.040 ************************************ 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:39.040 18:40:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:39.040 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:39.041 18:40:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2918689 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2918689 /var/tmp/spdk-raid.sock 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2918689 ']' 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:39.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.041 18:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:39.041 [2024-07-15 18:40:24.494783] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:25:39.041 [2024-07-15 18:40:24.494858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918689 ] 00:25:39.041 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:39.041 Zero copy mechanism will not be used. 00:25:39.299 [2024-07-15 18:40:24.594663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.299 [2024-07-15 18:40:24.689860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.299 [2024-07-15 18:40:24.744971] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:39.299 [2024-07-15 18:40:24.745002] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.254 18:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.254 18:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:25:40.254 18:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:40.254 18:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:40.254 BaseBdev1_malloc 00:25:40.254 18:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:40.539 [2024-07-15 18:40:25.937523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:40.539 [2024-07-15 18:40:25.937568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.539 [2024-07-15 18:40:25.937589] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x1e6a130 00:25:40.539 [2024-07-15 18:40:25.937598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.539 [2024-07-15 18:40:25.939317] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.539 [2024-07-15 18:40:25.939344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:40.539 BaseBdev1 00:25:40.539 18:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:40.539 18:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:41.133 BaseBdev2_malloc 00:25:41.133 18:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:41.391 [2024-07-15 18:40:26.696241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:41.391 [2024-07-15 18:40:26.696288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.391 [2024-07-15 18:40:26.696305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x200ffa0 00:25:41.391 [2024-07-15 18:40:26.696314] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.391 [2024-07-15 18:40:26.697918] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.391 [2024-07-15 18:40:26.697944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:41.391 BaseBdev2 00:25:41.391 18:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:41.391 18:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:41.649 BaseBdev3_malloc 00:25:41.907 18:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:41.907 [2024-07-15 18:40:27.450831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:41.907 [2024-07-15 18:40:27.450874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.907 [2024-07-15 18:40:27.450891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x201b970 00:25:41.907 [2024-07-15 18:40:27.450900] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.907 [2024-07-15 18:40:27.452510] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.907 [2024-07-15 18:40:27.452535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:41.907 BaseBdev3 00:25:42.165 18:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:42.165 18:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:42.165 BaseBdev4_malloc 00:25:42.423 18:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:42.681 [2024-07-15 18:40:28.185240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:42.681 [2024-07-15 18:40:28.185284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.681 
[2024-07-15 18:40:28.185305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20128c0 00:25:42.681 [2024-07-15 18:40:28.185314] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.681 [2024-07-15 18:40:28.186906] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.681 [2024-07-15 18:40:28.186932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:42.681 BaseBdev4 00:25:42.681 18:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:42.939 spare_malloc 00:25:42.939 18:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:43.504 spare_delay 00:25:43.504 18:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:44.071 [2024-07-15 18:40:29.461062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:44.071 [2024-07-15 18:40:29.461103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.071 [2024-07-15 18:40:29.461122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e62bf0 00:25:44.071 [2024-07-15 18:40:29.461131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.071 [2024-07-15 18:40:29.462744] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.071 [2024-07-15 18:40:29.462769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:44.071 spare 00:25:44.071 
18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:44.330 [2024-07-15 18:40:29.725790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:44.330 [2024-07-15 18:40:29.727172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:44.330 [2024-07-15 18:40:29.727229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:44.330 [2024-07-15 18:40:29.727275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:44.330 [2024-07-15 18:40:29.727359] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e64990 00:25:44.330 [2024-07-15 18:40:29.727368] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:44.330 [2024-07-15 18:40:29.727587] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e681f0 00:25:44.330 [2024-07-15 18:40:29.727743] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e64990 00:25:44.330 [2024-07-15 18:40:29.727752] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e64990 00:25:44.330 [2024-07-15 18:40:29.727873] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.330 18:40:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.330 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.590 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.590 "name": "raid_bdev1", 00:25:44.590 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:44.590 "strip_size_kb": 0, 00:25:44.590 "state": "online", 00:25:44.590 "raid_level": "raid1", 00:25:44.590 "superblock": false, 00:25:44.590 "num_base_bdevs": 4, 00:25:44.590 "num_base_bdevs_discovered": 4, 00:25:44.590 "num_base_bdevs_operational": 4, 00:25:44.590 "base_bdevs_list": [ 00:25:44.590 { 00:25:44.590 "name": "BaseBdev1", 00:25:44.590 "uuid": "6e7094b5-210a-5186-982c-cc0404b8c485", 00:25:44.590 "is_configured": true, 00:25:44.590 "data_offset": 0, 00:25:44.590 "data_size": 65536 00:25:44.590 }, 00:25:44.590 { 00:25:44.590 "name": "BaseBdev2", 00:25:44.590 "uuid": "774784a7-0faa-5498-9e1b-691c035d77a9", 00:25:44.590 "is_configured": true, 00:25:44.590 "data_offset": 0, 00:25:44.590 "data_size": 65536 00:25:44.590 }, 00:25:44.590 { 00:25:44.590 "name": "BaseBdev3", 00:25:44.590 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 
00:25:44.590 "is_configured": true, 00:25:44.590 "data_offset": 0, 00:25:44.590 "data_size": 65536 00:25:44.590 }, 00:25:44.590 { 00:25:44.590 "name": "BaseBdev4", 00:25:44.590 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:44.590 "is_configured": true, 00:25:44.590 "data_offset": 0, 00:25:44.590 "data_size": 65536 00:25:44.590 } 00:25:44.590 ] 00:25:44.590 }' 00:25:44.590 18:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.590 18:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:45.157 18:40:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:45.157 18:40:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:45.416 [2024-07-15 18:40:30.853133] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:45.416 18:40:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:25:45.416 18:40:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.416 18:40:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:45.674 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:25:45.674 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:25:45.674 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:45.674 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:25:45.933 [2024-07-15 18:40:31.336099] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e61500 00:25:45.933 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:45.933 Zero copy mechanism will not be used. 00:25:45.933 Running I/O for 60 seconds... 00:25:45.933 [2024-07-15 18:40:31.371751] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:45.933 [2024-07-15 18:40:31.380641] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1e61500 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.933 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:46.192 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.192 "name": "raid_bdev1", 00:25:46.192 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:46.192 "strip_size_kb": 0, 00:25:46.192 "state": "online", 00:25:46.192 "raid_level": "raid1", 00:25:46.192 "superblock": false, 00:25:46.192 "num_base_bdevs": 4, 00:25:46.192 "num_base_bdevs_discovered": 3, 00:25:46.192 "num_base_bdevs_operational": 3, 00:25:46.192 "base_bdevs_list": [ 00:25:46.192 { 00:25:46.192 "name": null, 00:25:46.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.192 "is_configured": false, 00:25:46.192 "data_offset": 0, 00:25:46.192 "data_size": 65536 00:25:46.192 }, 00:25:46.192 { 00:25:46.192 "name": "BaseBdev2", 00:25:46.192 "uuid": "774784a7-0faa-5498-9e1b-691c035d77a9", 00:25:46.192 "is_configured": true, 00:25:46.192 "data_offset": 0, 00:25:46.192 "data_size": 65536 00:25:46.192 }, 00:25:46.192 { 00:25:46.192 "name": "BaseBdev3", 00:25:46.192 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:46.192 "is_configured": true, 00:25:46.192 "data_offset": 0, 00:25:46.192 "data_size": 65536 00:25:46.192 }, 00:25:46.192 { 00:25:46.192 "name": "BaseBdev4", 00:25:46.192 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:46.192 "is_configured": true, 00:25:46.192 "data_offset": 0, 00:25:46.192 "data_size": 65536 00:25:46.192 } 00:25:46.192 ] 00:25:46.192 }' 00:25:46.192 18:40:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.192 18:40:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:47.129 18:40:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:47.129 [2024-07-15 18:40:32.633329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:47.389 18:40:32 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:47.389 [2024-07-15 18:40:32.702564] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e61770 00:25:47.389 [2024-07-15 18:40:32.704757] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:47.389 [2024-07-15 18:40:32.831853] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:47.389 [2024-07-15 18:40:32.832347] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:47.647 [2024-07-15 18:40:33.056838] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:47.647 [2024-07-15 18:40:33.057004] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:47.922 [2024-07-15 18:40:33.426658] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:47.922 [2024-07-15 18:40:33.426956] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:48.180 [2024-07-15 18:40:33.550981] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:48.180 [2024-07-15 18:40:33.551575] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.180 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.438 [2024-07-15 18:40:33.905659] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:48.438 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.438 "name": "raid_bdev1", 00:25:48.438 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:48.438 "strip_size_kb": 0, 00:25:48.438 "state": "online", 00:25:48.438 "raid_level": "raid1", 00:25:48.438 "superblock": false, 00:25:48.438 "num_base_bdevs": 4, 00:25:48.438 "num_base_bdevs_discovered": 4, 00:25:48.438 "num_base_bdevs_operational": 4, 00:25:48.438 "process": { 00:25:48.438 "type": "rebuild", 00:25:48.438 "target": "spare", 00:25:48.438 "progress": { 00:25:48.438 "blocks": 14336, 00:25:48.438 "percent": 21 00:25:48.438 } 00:25:48.438 }, 00:25:48.438 "base_bdevs_list": [ 00:25:48.438 { 00:25:48.438 "name": "spare", 00:25:48.438 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:48.438 "is_configured": true, 00:25:48.438 "data_offset": 0, 00:25:48.438 "data_size": 65536 00:25:48.438 }, 00:25:48.438 { 00:25:48.438 "name": "BaseBdev2", 00:25:48.438 "uuid": "774784a7-0faa-5498-9e1b-691c035d77a9", 00:25:48.438 "is_configured": true, 00:25:48.438 "data_offset": 0, 00:25:48.438 "data_size": 65536 00:25:48.438 }, 00:25:48.438 { 00:25:48.438 "name": "BaseBdev3", 00:25:48.438 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:48.438 "is_configured": true, 00:25:48.438 "data_offset": 0, 00:25:48.438 "data_size": 65536 
00:25:48.438 }, 00:25:48.438 { 00:25:48.438 "name": "BaseBdev4", 00:25:48.438 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:48.438 "is_configured": true, 00:25:48.438 "data_offset": 0, 00:25:48.438 "data_size": 65536 00:25:48.438 } 00:25:48.438 ] 00:25:48.438 }' 00:25:48.439 18:40:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:48.697 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.697 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:48.697 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.697 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:48.697 [2024-07-15 18:40:34.151103] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:48.697 [2024-07-15 18:40:34.216492] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.955 [2024-07-15 18:40:34.315668] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:48.955 [2024-07-15 18:40:34.328004] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.955 [2024-07-15 18:40:34.328035] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.955 [2024-07-15 18:40:34.328044] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:48.955 [2024-07-15 18:40:34.361158] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1e61500 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.955 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.214 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.214 "name": "raid_bdev1", 00:25:49.214 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:49.214 "strip_size_kb": 0, 00:25:49.214 "state": "online", 00:25:49.214 "raid_level": "raid1", 00:25:49.214 "superblock": false, 00:25:49.214 "num_base_bdevs": 4, 00:25:49.214 "num_base_bdevs_discovered": 3, 00:25:49.214 "num_base_bdevs_operational": 3, 00:25:49.214 "base_bdevs_list": [ 00:25:49.214 { 00:25:49.214 "name": null, 00:25:49.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.214 "is_configured": false, 00:25:49.214 "data_offset": 0, 00:25:49.214 "data_size": 65536 00:25:49.214 
}, 00:25:49.214 { 00:25:49.214 "name": "BaseBdev2", 00:25:49.214 "uuid": "774784a7-0faa-5498-9e1b-691c035d77a9", 00:25:49.214 "is_configured": true, 00:25:49.214 "data_offset": 0, 00:25:49.214 "data_size": 65536 00:25:49.214 }, 00:25:49.214 { 00:25:49.214 "name": "BaseBdev3", 00:25:49.214 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:49.214 "is_configured": true, 00:25:49.214 "data_offset": 0, 00:25:49.214 "data_size": 65536 00:25:49.214 }, 00:25:49.214 { 00:25:49.214 "name": "BaseBdev4", 00:25:49.214 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:49.214 "is_configured": true, 00:25:49.214 "data_offset": 0, 00:25:49.214 "data_size": 65536 00:25:49.214 } 00:25:49.214 ] 00:25:49.214 }' 00:25:49.214 18:40:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.214 18:40:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.150 "name": "raid_bdev1", 00:25:50.150 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:50.150 
"strip_size_kb": 0, 00:25:50.150 "state": "online", 00:25:50.150 "raid_level": "raid1", 00:25:50.150 "superblock": false, 00:25:50.150 "num_base_bdevs": 4, 00:25:50.150 "num_base_bdevs_discovered": 3, 00:25:50.150 "num_base_bdevs_operational": 3, 00:25:50.150 "base_bdevs_list": [ 00:25:50.150 { 00:25:50.150 "name": null, 00:25:50.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.150 "is_configured": false, 00:25:50.150 "data_offset": 0, 00:25:50.150 "data_size": 65536 00:25:50.150 }, 00:25:50.150 { 00:25:50.150 "name": "BaseBdev2", 00:25:50.150 "uuid": "774784a7-0faa-5498-9e1b-691c035d77a9", 00:25:50.150 "is_configured": true, 00:25:50.150 "data_offset": 0, 00:25:50.150 "data_size": 65536 00:25:50.150 }, 00:25:50.150 { 00:25:50.150 "name": "BaseBdev3", 00:25:50.150 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:50.150 "is_configured": true, 00:25:50.150 "data_offset": 0, 00:25:50.150 "data_size": 65536 00:25:50.150 }, 00:25:50.150 { 00:25:50.150 "name": "BaseBdev4", 00:25:50.150 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:50.150 "is_configured": true, 00:25:50.150 "data_offset": 0, 00:25:50.150 "data_size": 65536 00:25:50.150 } 00:25:50.150 ] 00:25:50.150 }' 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:50.150 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:50.409 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:50.409 18:40:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:50.409 [2024-07-15 18:40:35.952664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.668 
18:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:50.668 [2024-07-15 18:40:36.071149] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e65020 00:25:50.668 [2024-07-15 18:40:36.072703] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:50.668 [2024-07-15 18:40:36.205955] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:50.668 [2024-07-15 18:40:36.206235] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:50.927 [2024-07-15 18:40:36.433589] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:50.927 [2024-07-15 18:40:36.434155] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:51.495 [2024-07-15 18:40:36.923316] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.495 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:51.753 [2024-07-15 18:40:37.281807] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:52.012 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.012 "name": "raid_bdev1", 00:25:52.012 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:52.013 "strip_size_kb": 0, 00:25:52.013 "state": "online", 00:25:52.013 "raid_level": "raid1", 00:25:52.013 "superblock": false, 00:25:52.013 "num_base_bdevs": 4, 00:25:52.013 "num_base_bdevs_discovered": 4, 00:25:52.013 "num_base_bdevs_operational": 4, 00:25:52.013 "process": { 00:25:52.013 "type": "rebuild", 00:25:52.013 "target": "spare", 00:25:52.013 "progress": { 00:25:52.013 "blocks": 14336, 00:25:52.013 "percent": 21 00:25:52.013 } 00:25:52.013 }, 00:25:52.013 "base_bdevs_list": [ 00:25:52.013 { 00:25:52.013 "name": "spare", 00:25:52.013 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:52.013 "is_configured": true, 00:25:52.013 "data_offset": 0, 00:25:52.013 "data_size": 65536 00:25:52.013 }, 00:25:52.013 { 00:25:52.013 "name": "BaseBdev2", 00:25:52.013 "uuid": "774784a7-0faa-5498-9e1b-691c035d77a9", 00:25:52.013 "is_configured": true, 00:25:52.013 "data_offset": 0, 00:25:52.013 "data_size": 65536 00:25:52.013 }, 00:25:52.013 { 00:25:52.013 "name": "BaseBdev3", 00:25:52.013 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:52.013 "is_configured": true, 00:25:52.013 "data_offset": 0, 00:25:52.013 "data_size": 65536 00:25:52.013 }, 00:25:52.013 { 00:25:52.013 "name": "BaseBdev4", 00:25:52.013 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:52.013 "is_configured": true, 00:25:52.013 "data_offset": 0, 00:25:52.013 "data_size": 65536 00:25:52.013 } 00:25:52.013 ] 00:25:52.013 }' 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:25:52.013 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:52.272 [2024-07-15 18:40:37.643884] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:52.272 [2024-07-15 18:40:37.652833] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:52.272 [2024-07-15 18:40:37.716027] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1e61500 00:25:52.272 [2024-07-15 18:40:37.716052] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1e65020 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- 
# local process_type=rebuild 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.272 18:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.531 [2024-07-15 18:40:37.869010] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:52.531 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.531 "name": "raid_bdev1", 00:25:52.531 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:52.531 "strip_size_kb": 0, 00:25:52.531 "state": "online", 00:25:52.531 "raid_level": "raid1", 00:25:52.531 "superblock": false, 00:25:52.531 "num_base_bdevs": 4, 00:25:52.531 "num_base_bdevs_discovered": 3, 00:25:52.531 "num_base_bdevs_operational": 3, 00:25:52.531 "process": { 00:25:52.531 "type": "rebuild", 00:25:52.531 "target": "spare", 00:25:52.531 "progress": { 00:25:52.531 "blocks": 22528, 00:25:52.531 "percent": 34 00:25:52.531 } 00:25:52.531 }, 00:25:52.531 "base_bdevs_list": [ 00:25:52.531 { 00:25:52.531 "name": "spare", 00:25:52.531 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:52.531 "is_configured": true, 00:25:52.531 "data_offset": 0, 00:25:52.531 "data_size": 65536 00:25:52.531 }, 00:25:52.531 { 00:25:52.531 "name": null, 00:25:52.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.531 "is_configured": false, 00:25:52.531 "data_offset": 0, 00:25:52.531 "data_size": 65536 00:25:52.531 }, 00:25:52.531 { 00:25:52.531 "name": "BaseBdev3", 00:25:52.531 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:52.531 "is_configured": 
true, 00:25:52.531 "data_offset": 0, 00:25:52.531 "data_size": 65536 00:25:52.531 }, 00:25:52.531 { 00:25:52.531 "name": "BaseBdev4", 00:25:52.531 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:52.531 "is_configured": true, 00:25:52.531 "data_offset": 0, 00:25:52.531 "data_size": 65536 00:25:52.531 } 00:25:52.531 ] 00:25:52.531 }' 00:25:52.531 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.531 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=997 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.790 [2024-07-15 18:40:38.133021] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.790 18:40:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.050 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.050 "name": "raid_bdev1", 00:25:53.050 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:53.050 "strip_size_kb": 0, 00:25:53.050 "state": "online", 00:25:53.050 "raid_level": "raid1", 00:25:53.050 "superblock": false, 00:25:53.050 "num_base_bdevs": 4, 00:25:53.050 "num_base_bdevs_discovered": 3, 00:25:53.050 "num_base_bdevs_operational": 3, 00:25:53.050 "process": { 00:25:53.050 "type": "rebuild", 00:25:53.050 "target": "spare", 00:25:53.050 "progress": { 00:25:53.050 "blocks": 28672, 00:25:53.050 "percent": 43 00:25:53.050 } 00:25:53.050 }, 00:25:53.050 "base_bdevs_list": [ 00:25:53.050 { 00:25:53.050 "name": "spare", 00:25:53.050 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:53.050 "is_configured": true, 00:25:53.050 "data_offset": 0, 00:25:53.050 "data_size": 65536 00:25:53.050 }, 00:25:53.050 { 00:25:53.050 "name": null, 00:25:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.050 "is_configured": false, 00:25:53.050 "data_offset": 0, 00:25:53.050 "data_size": 65536 00:25:53.050 }, 00:25:53.050 { 00:25:53.050 "name": "BaseBdev3", 00:25:53.050 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:53.050 "is_configured": true, 00:25:53.050 "data_offset": 0, 00:25:53.050 "data_size": 65536 00:25:53.050 }, 00:25:53.050 { 00:25:53.050 "name": "BaseBdev4", 00:25:53.050 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:53.050 "is_configured": true, 00:25:53.050 "data_offset": 0, 00:25:53.050 "data_size": 65536 00:25:53.050 } 00:25:53.050 ] 00:25:53.050 }' 00:25:53.050 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.050 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.050 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.target // "none"' 00:25:53.050 [2024-07-15 18:40:38.502479] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:53.050 [2024-07-15 18:40:38.503257] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:53.050 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.050 18:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:53.309 [2024-07-15 18:40:38.741001] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:54.262 "name": "raid_bdev1", 00:25:54.262 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:54.262 "strip_size_kb": 0, 00:25:54.262 "state": "online", 00:25:54.262 
"raid_level": "raid1", 00:25:54.262 "superblock": false, 00:25:54.262 "num_base_bdevs": 4, 00:25:54.262 "num_base_bdevs_discovered": 3, 00:25:54.262 "num_base_bdevs_operational": 3, 00:25:54.262 "process": { 00:25:54.262 "type": "rebuild", 00:25:54.262 "target": "spare", 00:25:54.262 "progress": { 00:25:54.262 "blocks": 51200, 00:25:54.262 "percent": 78 00:25:54.262 } 00:25:54.262 }, 00:25:54.262 "base_bdevs_list": [ 00:25:54.262 { 00:25:54.262 "name": "spare", 00:25:54.262 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:54.262 "is_configured": true, 00:25:54.262 "data_offset": 0, 00:25:54.262 "data_size": 65536 00:25:54.262 }, 00:25:54.262 { 00:25:54.262 "name": null, 00:25:54.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.262 "is_configured": false, 00:25:54.262 "data_offset": 0, 00:25:54.262 "data_size": 65536 00:25:54.262 }, 00:25:54.262 { 00:25:54.262 "name": "BaseBdev3", 00:25:54.262 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:54.262 "is_configured": true, 00:25:54.262 "data_offset": 0, 00:25:54.262 "data_size": 65536 00:25:54.262 }, 00:25:54.262 { 00:25:54.262 "name": "BaseBdev4", 00:25:54.262 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:54.262 "is_configured": true, 00:25:54.262 "data_offset": 0, 00:25:54.262 "data_size": 65536 00:25:54.262 } 00:25:54.262 ] 00:25:54.262 }' 00:25:54.262 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:54.520 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.520 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:54.520 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.520 18:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:54.777 [2024-07-15 18:40:40.156297] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:55.070 [2024-07-15 18:40:40.491009] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:55.356 [2024-07-15 18:40:40.602221] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:55.356 [2024-07-15 18:40:40.604251] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.356 18:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.613 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:55.613 "name": "raid_bdev1", 00:25:55.613 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:55.613 "strip_size_kb": 0, 00:25:55.613 "state": "online", 00:25:55.613 "raid_level": "raid1", 00:25:55.613 "superblock": false, 00:25:55.613 "num_base_bdevs": 4, 00:25:55.613 "num_base_bdevs_discovered": 3, 00:25:55.613 "num_base_bdevs_operational": 3, 00:25:55.613 "base_bdevs_list": [ 00:25:55.613 { 00:25:55.613 "name": "spare", 00:25:55.613 "uuid": 
"96714195-2cff-58db-adcd-d7d9bf687827", 00:25:55.613 "is_configured": true, 00:25:55.613 "data_offset": 0, 00:25:55.613 "data_size": 65536 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "name": null, 00:25:55.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.613 "is_configured": false, 00:25:55.613 "data_offset": 0, 00:25:55.613 "data_size": 65536 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "name": "BaseBdev3", 00:25:55.613 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:55.613 "is_configured": true, 00:25:55.613 "data_offset": 0, 00:25:55.613 "data_size": 65536 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "name": "BaseBdev4", 00:25:55.613 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:55.613 "is_configured": true, 00:25:55.613 "data_offset": 0, 00:25:55.613 "data_size": 65536 00:25:55.613 } 00:25:55.613 ] 00:25:55.613 }' 00:25:55.613 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:55.870 18:40:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.870 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:56.127 "name": "raid_bdev1", 00:25:56.127 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:56.127 "strip_size_kb": 0, 00:25:56.127 "state": "online", 00:25:56.127 "raid_level": "raid1", 00:25:56.127 "superblock": false, 00:25:56.127 "num_base_bdevs": 4, 00:25:56.127 "num_base_bdevs_discovered": 3, 00:25:56.127 "num_base_bdevs_operational": 3, 00:25:56.127 "base_bdevs_list": [ 00:25:56.127 { 00:25:56.127 "name": "spare", 00:25:56.127 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:56.127 "is_configured": true, 00:25:56.127 "data_offset": 0, 00:25:56.127 "data_size": 65536 00:25:56.127 }, 00:25:56.127 { 00:25:56.127 "name": null, 00:25:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.127 "is_configured": false, 00:25:56.127 "data_offset": 0, 00:25:56.127 "data_size": 65536 00:25:56.127 }, 00:25:56.127 { 00:25:56.127 "name": "BaseBdev3", 00:25:56.127 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:56.127 "is_configured": true, 00:25:56.127 "data_offset": 0, 00:25:56.127 "data_size": 65536 00:25:56.127 }, 00:25:56.127 { 00:25:56.127 "name": "BaseBdev4", 00:25:56.127 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:56.127 "is_configured": true, 00:25:56.127 "data_offset": 0, 00:25:56.127 "data_size": 65536 00:25:56.127 } 00:25:56.127 ] 00:25:56.127 }' 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.127 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.385 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.385 "name": "raid_bdev1", 00:25:56.385 "uuid": "a090b1a8-ff96-44c7-96e3-8c993650c73e", 00:25:56.385 "strip_size_kb": 0, 00:25:56.385 "state": "online", 00:25:56.385 "raid_level": "raid1", 00:25:56.385 "superblock": false, 00:25:56.385 "num_base_bdevs": 4, 00:25:56.385 
"num_base_bdevs_discovered": 3, 00:25:56.385 "num_base_bdevs_operational": 3, 00:25:56.385 "base_bdevs_list": [ 00:25:56.385 { 00:25:56.385 "name": "spare", 00:25:56.385 "uuid": "96714195-2cff-58db-adcd-d7d9bf687827", 00:25:56.385 "is_configured": true, 00:25:56.385 "data_offset": 0, 00:25:56.385 "data_size": 65536 00:25:56.385 }, 00:25:56.385 { 00:25:56.385 "name": null, 00:25:56.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.385 "is_configured": false, 00:25:56.385 "data_offset": 0, 00:25:56.385 "data_size": 65536 00:25:56.385 }, 00:25:56.385 { 00:25:56.385 "name": "BaseBdev3", 00:25:56.385 "uuid": "9a5a2a5d-d386-586a-b3d3-c155879990ec", 00:25:56.385 "is_configured": true, 00:25:56.385 "data_offset": 0, 00:25:56.385 "data_size": 65536 00:25:56.385 }, 00:25:56.385 { 00:25:56.385 "name": "BaseBdev4", 00:25:56.385 "uuid": "f3a8acf5-024b-5d2a-bb00-024a412fce50", 00:25:56.385 "is_configured": true, 00:25:56.385 "data_offset": 0, 00:25:56.385 "data_size": 65536 00:25:56.385 } 00:25:56.385 ] 00:25:56.385 }' 00:25:56.385 18:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.385 18:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:56.949 18:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:57.207 [2024-07-15 18:40:42.689704] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:57.207 [2024-07-15 18:40:42.689733] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:57.207 00:25:57.207 Latency(us) 00:25:57.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.207 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:57.207 raid_bdev1 : 11.38 87.03 261.09 0.00 0.00 15327.98 312.08 122833.19 
00:25:57.207 =================================================================================================================== 00:25:57.207 Total : 87.03 261.09 0.00 0.00 15327.98 312.08 122833.19 00:25:57.207 [2024-07-15 18:40:42.745971] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.207 [2024-07-15 18:40:42.745998] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.207 [2024-07-15 18:40:42.746099] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:57.207 [2024-07-15 18:40:42.746109] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e64990 name raid_bdev1, state offline 00:25:57.207 0 00:25:57.464 18:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.464 18:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:57.722 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:57.722 /dev/nbd0 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:57.979 1+0 records in 00:25:57.979 1+0 records out 00:25:57.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231834 s, 17.7 MB/s 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:25:57.979 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:57.980 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:58.237 /dev/nbd1 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.237 1+0 records in 00:25:58.237 1+0 records out 00:25:58.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230755 s, 17.8 MB/s 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:58.237 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:58.495 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:58.496 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:58.496 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:58.496 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:58.496 18:40:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:58.753 /dev/nbd1 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:58.753 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.754 1+0 records in 00:25:58.754 1+0 records out 00:25:58.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207614 s, 19.7 MB/s 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:58.754 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:59.011 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:59.011 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:59.011 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:59.011 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.011 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.011 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd0 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.269 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:25:59.528 18:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2918689 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2918689 ']' 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2918689 00:25:59.529 
18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2918689 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2918689' 00:25:59.529 killing process with pid 2918689 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2918689 00:25:59.529 Received shutdown signal, test time was about 13.577409 seconds 00:25:59.529 00:25:59.529 Latency(us) 00:25:59.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.529 =================================================================================================================== 00:25:59.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.529 [2024-07-15 18:40:44.949101] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.529 18:40:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2918689 00:25:59.529 [2024-07-15 18:40:44.988979] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:25:59.787 00:25:59.787 real 0m20.760s 00:25:59.787 user 0m33.737s 00:25:59.787 sys 0m2.873s 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.787 ************************************ 00:25:59.787 END TEST 
raid_rebuild_test_io 00:25:59.787 ************************************ 00:25:59.787 18:40:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:59.787 18:40:45 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:25:59.787 18:40:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:59.787 18:40:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.787 18:40:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.787 ************************************ 00:25:59.787 START TEST raid_rebuild_test_sb_io 00:25:59.787 ************************************ 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:59.787 
18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:59.787 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # 
create_arg+=' -s' 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2922121 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2922121 /var/tmp/spdk-raid.sock 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2922121 ']' 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:59.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.788 18:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.788 [2024-07-15 18:40:45.301697] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:25:59.788 [2024-07-15 18:40:45.301768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922121 ] 00:25:59.788 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:59.788 Zero copy mechanism will not be used. 
00:26:00.047 [2024-07-15 18:40:45.408633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.047 [2024-07-15 18:40:45.505333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.047 [2024-07-15 18:40:45.565289] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:00.047 [2024-07-15 18:40:45.565325] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:00.983 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.983 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:26:00.983 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:00.983 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:00.983 BaseBdev1_malloc 00:26:00.983 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:01.242 [2024-07-15 18:40:46.755048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:01.242 [2024-07-15 18:40:46.755092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.242 [2024-07-15 18:40:46.755113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1898130 00:26:01.242 [2024-07-15 18:40:46.755122] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.242 [2024-07-15 18:40:46.756829] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.242 [2024-07-15 18:40:46.756857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:01.242 
BaseBdev1 00:26:01.242 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:01.242 18:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:01.500 BaseBdev2_malloc 00:26:01.500 18:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:01.759 [2024-07-15 18:40:47.260872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:01.759 [2024-07-15 18:40:47.260913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.759 [2024-07-15 18:40:47.260929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3dfa0 00:26:01.759 [2024-07-15 18:40:47.260938] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.759 [2024-07-15 18:40:47.262514] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.759 [2024-07-15 18:40:47.262540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:01.759 BaseBdev2 00:26:01.759 18:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:01.759 18:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:02.017 BaseBdev3_malloc 00:26:02.017 18:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:02.275 [2024-07-15 
18:40:47.774684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:02.275 [2024-07-15 18:40:47.774728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.275 [2024-07-15 18:40:47.774745] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a49970 00:26:02.275 [2024-07-15 18:40:47.774754] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.275 [2024-07-15 18:40:47.776338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.275 [2024-07-15 18:40:47.776364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:02.275 BaseBdev3 00:26:02.275 18:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:02.275 18:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:02.533 BaseBdev4_malloc 00:26:02.533 18:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:02.791 [2024-07-15 18:40:48.288600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:02.791 [2024-07-15 18:40:48.288645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.791 [2024-07-15 18:40:48.288665] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a408c0 00:26:02.791 [2024-07-15 18:40:48.288674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.791 [2024-07-15 18:40:48.290252] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.791 [2024-07-15 18:40:48.290278] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:02.791 BaseBdev4 00:26:02.791 18:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:03.050 spare_malloc 00:26:03.050 18:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:03.308 spare_delay 00:26:03.308 18:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:03.566 [2024-07-15 18:40:49.059170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:03.566 [2024-07-15 18:40:49.059212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.566 [2024-07-15 18:40:49.059228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1890bf0 00:26:03.566 [2024-07-15 18:40:49.059237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.566 [2024-07-15 18:40:49.060857] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.566 [2024-07-15 18:40:49.060882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:03.566 spare 00:26:03.566 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:03.825 [2024-07-15 18:40:49.311874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:03.825 [2024-07-15 
18:40:49.313239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:03.825 [2024-07-15 18:40:49.313295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:03.825 [2024-07-15 18:40:49.313340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:03.825 [2024-07-15 18:40:49.313534] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1892990 00:26:03.825 [2024-07-15 18:40:49.313545] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:03.825 [2024-07-15 18:40:49.313753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a3d3b0 00:26:03.825 [2024-07-15 18:40:49.313908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1892990 00:26:03.825 [2024-07-15 18:40:49.313917] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1892990 00:26:03.825 [2024-07-15 18:40:49.314023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.825 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.084 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.085 "name": "raid_bdev1", 00:26:04.085 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:04.085 "strip_size_kb": 0, 00:26:04.085 "state": "online", 00:26:04.085 "raid_level": "raid1", 00:26:04.085 "superblock": true, 00:26:04.085 "num_base_bdevs": 4, 00:26:04.085 "num_base_bdevs_discovered": 4, 00:26:04.085 "num_base_bdevs_operational": 4, 00:26:04.085 "base_bdevs_list": [ 00:26:04.085 { 00:26:04.085 "name": "BaseBdev1", 00:26:04.085 "uuid": "419b5781-2853-5149-8a0f-351d6903f9ac", 00:26:04.085 "is_configured": true, 00:26:04.085 "data_offset": 2048, 00:26:04.085 "data_size": 63488 00:26:04.085 }, 00:26:04.085 { 00:26:04.085 "name": "BaseBdev2", 00:26:04.085 "uuid": "635d13ba-4888-5ed5-9d1f-b9471bed2c19", 00:26:04.085 "is_configured": true, 00:26:04.085 "data_offset": 2048, 00:26:04.085 "data_size": 63488 00:26:04.085 }, 00:26:04.085 { 00:26:04.085 "name": "BaseBdev3", 00:26:04.085 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:04.085 "is_configured": true, 00:26:04.085 "data_offset": 2048, 00:26:04.085 "data_size": 63488 00:26:04.085 }, 00:26:04.085 { 00:26:04.085 "name": "BaseBdev4", 00:26:04.085 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:04.085 "is_configured": true, 00:26:04.085 "data_offset": 2048, 00:26:04.085 "data_size": 63488 00:26:04.085 } 00:26:04.085 ] 
00:26:04.085 }' 00:26:04.085 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.085 18:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:05.020 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:05.020 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:05.020 [2024-07-15 18:40:50.451248] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:05.020 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:26:05.020 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.020 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:05.279 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:26:05.279 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:26:05.279 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:05.279 18:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:05.538 [2024-07-15 18:40:50.849987] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a3cdf0 00:26:05.538 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:05.538 Zero copy mechanism will not be used. 
00:26:05.538 Running I/O for 60 seconds... 00:26:05.538 [2024-07-15 18:40:50.975542] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:05.538 [2024-07-15 18:40:50.975775] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1a3cdf0 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.538 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.797 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.797 "name": "raid_bdev1", 00:26:05.797 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:05.798 "strip_size_kb": 0, 00:26:05.798 "state": 
"online", 00:26:05.798 "raid_level": "raid1", 00:26:05.798 "superblock": true, 00:26:05.798 "num_base_bdevs": 4, 00:26:05.798 "num_base_bdevs_discovered": 3, 00:26:05.798 "num_base_bdevs_operational": 3, 00:26:05.798 "base_bdevs_list": [ 00:26:05.798 { 00:26:05.798 "name": null, 00:26:05.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.798 "is_configured": false, 00:26:05.798 "data_offset": 2048, 00:26:05.798 "data_size": 63488 00:26:05.798 }, 00:26:05.798 { 00:26:05.798 "name": "BaseBdev2", 00:26:05.798 "uuid": "635d13ba-4888-5ed5-9d1f-b9471bed2c19", 00:26:05.798 "is_configured": true, 00:26:05.798 "data_offset": 2048, 00:26:05.798 "data_size": 63488 00:26:05.798 }, 00:26:05.798 { 00:26:05.798 "name": "BaseBdev3", 00:26:05.798 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:05.798 "is_configured": true, 00:26:05.798 "data_offset": 2048, 00:26:05.798 "data_size": 63488 00:26:05.798 }, 00:26:05.798 { 00:26:05.798 "name": "BaseBdev4", 00:26:05.798 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:05.798 "is_configured": true, 00:26:05.798 "data_offset": 2048, 00:26:05.798 "data_size": 63488 00:26:05.798 } 00:26:05.798 ] 00:26:05.798 }' 00:26:05.798 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.798 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:06.735 18:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:06.735 [2024-07-15 18:40:52.198211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:06.735 18:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:06.735 [2024-07-15 18:40:52.277319] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a3d060 00:26:06.735 [2024-07-15 18:40:52.279505] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:07.303 [2024-07-15 18:40:52.581797] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:07.303 [2024-07-15 18:40:52.582405] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.869 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.869 [2024-07-15 18:40:53.373237] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:07.869 [2024-07-15 18:40:53.373501] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:08.128 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:08.128 "name": "raid_bdev1", 00:26:08.128 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:08.128 "strip_size_kb": 0, 00:26:08.128 "state": "online", 00:26:08.128 "raid_level": "raid1", 00:26:08.128 "superblock": true, 00:26:08.128 
"num_base_bdevs": 4, 00:26:08.128 "num_base_bdevs_discovered": 4, 00:26:08.128 "num_base_bdevs_operational": 4, 00:26:08.128 "process": { 00:26:08.128 "type": "rebuild", 00:26:08.128 "target": "spare", 00:26:08.128 "progress": { 00:26:08.128 "blocks": 16384, 00:26:08.128 "percent": 25 00:26:08.128 } 00:26:08.128 }, 00:26:08.128 "base_bdevs_list": [ 00:26:08.128 { 00:26:08.128 "name": "spare", 00:26:08.128 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:08.128 "is_configured": true, 00:26:08.128 "data_offset": 2048, 00:26:08.128 "data_size": 63488 00:26:08.128 }, 00:26:08.128 { 00:26:08.128 "name": "BaseBdev2", 00:26:08.128 "uuid": "635d13ba-4888-5ed5-9d1f-b9471bed2c19", 00:26:08.128 "is_configured": true, 00:26:08.128 "data_offset": 2048, 00:26:08.128 "data_size": 63488 00:26:08.128 }, 00:26:08.128 { 00:26:08.128 "name": "BaseBdev3", 00:26:08.128 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:08.128 "is_configured": true, 00:26:08.128 "data_offset": 2048, 00:26:08.128 "data_size": 63488 00:26:08.128 }, 00:26:08.128 { 00:26:08.128 "name": "BaseBdev4", 00:26:08.128 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:08.128 "is_configured": true, 00:26:08.128 "data_offset": 2048, 00:26:08.128 "data_size": 63488 00:26:08.128 } 00:26:08.128 ] 00:26:08.128 }' 00:26:08.128 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:08.128 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:08.128 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:08.128 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:08.128 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:08.128 [2024-07-15 
18:40:53.650797] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:08.387 [2024-07-15 18:40:53.850352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:08.387 [2024-07-15 18:40:53.866318] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:08.387 [2024-07-15 18:40:53.866502] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:08.387 [2024-07-15 18:40:53.867504] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:08.387 [2024-07-15 18:40:53.889089] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.387 [2024-07-15 18:40:53.889119] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:08.387 [2024-07-15 18:40:53.889127] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:08.387 [2024-07-15 18:40:53.904714] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1a3cdf0 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.646 18:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.905 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.905 "name": "raid_bdev1", 00:26:08.905 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:08.905 "strip_size_kb": 0, 00:26:08.905 "state": "online", 00:26:08.905 "raid_level": "raid1", 00:26:08.905 "superblock": true, 00:26:08.905 "num_base_bdevs": 4, 00:26:08.905 "num_base_bdevs_discovered": 3, 00:26:08.905 "num_base_bdevs_operational": 3, 00:26:08.905 "base_bdevs_list": [ 00:26:08.905 { 00:26:08.905 "name": null, 00:26:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.905 "is_configured": false, 00:26:08.905 "data_offset": 2048, 00:26:08.905 "data_size": 63488 00:26:08.905 }, 00:26:08.905 { 00:26:08.905 "name": "BaseBdev2", 00:26:08.905 "uuid": "635d13ba-4888-5ed5-9d1f-b9471bed2c19", 00:26:08.905 "is_configured": true, 00:26:08.905 "data_offset": 2048, 00:26:08.905 "data_size": 63488 00:26:08.905 }, 00:26:08.905 { 00:26:08.905 "name": "BaseBdev3", 00:26:08.905 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:08.905 "is_configured": true, 00:26:08.905 "data_offset": 2048, 00:26:08.905 "data_size": 63488 00:26:08.905 }, 00:26:08.905 { 00:26:08.905 "name": "BaseBdev4", 00:26:08.905 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:08.905 
"is_configured": true, 00:26:08.905 "data_offset": 2048, 00:26:08.905 "data_size": 63488 00:26:08.905 } 00:26:08.905 ] 00:26:08.905 }' 00:26:08.905 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.905 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.505 18:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.804 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:09.804 "name": "raid_bdev1", 00:26:09.804 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:09.804 "strip_size_kb": 0, 00:26:09.804 "state": "online", 00:26:09.804 "raid_level": "raid1", 00:26:09.804 "superblock": true, 00:26:09.804 "num_base_bdevs": 4, 00:26:09.804 "num_base_bdevs_discovered": 3, 00:26:09.804 "num_base_bdevs_operational": 3, 00:26:09.804 "base_bdevs_list": [ 00:26:09.804 { 00:26:09.804 "name": null, 00:26:09.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.804 "is_configured": false, 00:26:09.804 "data_offset": 2048, 00:26:09.804 "data_size": 63488 00:26:09.804 }, 00:26:09.804 { 00:26:09.804 "name": 
"BaseBdev2", 00:26:09.804 "uuid": "635d13ba-4888-5ed5-9d1f-b9471bed2c19", 00:26:09.804 "is_configured": true, 00:26:09.804 "data_offset": 2048, 00:26:09.804 "data_size": 63488 00:26:09.804 }, 00:26:09.804 { 00:26:09.804 "name": "BaseBdev3", 00:26:09.804 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:09.804 "is_configured": true, 00:26:09.804 "data_offset": 2048, 00:26:09.804 "data_size": 63488 00:26:09.804 }, 00:26:09.804 { 00:26:09.804 "name": "BaseBdev4", 00:26:09.804 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:09.804 "is_configured": true, 00:26:09.804 "data_offset": 2048, 00:26:09.804 "data_size": 63488 00:26:09.804 } 00:26:09.804 ] 00:26:09.804 }' 00:26:09.804 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:09.804 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:09.804 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:09.804 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:09.804 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:10.371 [2024-07-15 18:40:55.704710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:10.371 18:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:10.371 [2024-07-15 18:40:55.822481] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x159eb40 00:26:10.371 [2024-07-15 18:40:55.824036] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:10.630 [2024-07-15 18:40:55.955388] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:10.630 
[2024-07-15 18:40:55.955736] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:10.630 [2024-07-15 18:40:56.132539] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:10.630 [2024-07-15 18:40:56.133173] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:11.198 [2024-07-15 18:40:56.514296] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:11.198 [2024-07-15 18:40:56.647044] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:11.198 [2024-07-15 18:40:56.647286] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.457 18:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.457 [2024-07-15 18:40:56.983739] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 
12288 offset_end: 18432 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:11.716 "name": "raid_bdev1", 00:26:11.716 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:11.716 "strip_size_kb": 0, 00:26:11.716 "state": "online", 00:26:11.716 "raid_level": "raid1", 00:26:11.716 "superblock": true, 00:26:11.716 "num_base_bdevs": 4, 00:26:11.716 "num_base_bdevs_discovered": 4, 00:26:11.716 "num_base_bdevs_operational": 4, 00:26:11.716 "process": { 00:26:11.716 "type": "rebuild", 00:26:11.716 "target": "spare", 00:26:11.716 "progress": { 00:26:11.716 "blocks": 14336, 00:26:11.716 "percent": 22 00:26:11.716 } 00:26:11.716 }, 00:26:11.716 "base_bdevs_list": [ 00:26:11.716 { 00:26:11.716 "name": "spare", 00:26:11.716 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:11.716 "is_configured": true, 00:26:11.716 "data_offset": 2048, 00:26:11.716 "data_size": 63488 00:26:11.716 }, 00:26:11.716 { 00:26:11.716 "name": "BaseBdev2", 00:26:11.716 "uuid": "635d13ba-4888-5ed5-9d1f-b9471bed2c19", 00:26:11.716 "is_configured": true, 00:26:11.716 "data_offset": 2048, 00:26:11.716 "data_size": 63488 00:26:11.716 }, 00:26:11.716 { 00:26:11.716 "name": "BaseBdev3", 00:26:11.716 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:11.716 "is_configured": true, 00:26:11.716 "data_offset": 2048, 00:26:11.716 "data_size": 63488 00:26:11.716 }, 00:26:11.716 { 00:26:11.716 "name": "BaseBdev4", 00:26:11.716 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:11.716 "is_configured": true, 00:26:11.716 "data_offset": 2048, 00:26:11.716 "data_size": 63488 00:26:11.716 } 00:26:11.716 ] 00:26:11.716 }' 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:11.716 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:26:11.716 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:11.716 [2024-07-15 18:40:57.226475] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:12.284 [2024-07-15 18:40:57.616159] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:12.284 [2024-07-15 18:40:57.637787] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:12.284 [2024-07-15 18:40:57.749290] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:12.542 [2024-07-15 18:40:57.901382] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1a3cdf0 00:26:12.542 [2024-07-15 18:40:57.901407] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x159eb40 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:26:12.542 
18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.542 18:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:12.801 "name": "raid_bdev1", 00:26:12.801 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:12.801 "strip_size_kb": 0, 00:26:12.801 "state": "online", 00:26:12.801 "raid_level": "raid1", 00:26:12.801 "superblock": true, 00:26:12.801 "num_base_bdevs": 4, 00:26:12.801 "num_base_bdevs_discovered": 3, 00:26:12.801 "num_base_bdevs_operational": 3, 00:26:12.801 "process": { 00:26:12.801 "type": "rebuild", 00:26:12.801 "target": "spare", 00:26:12.801 "progress": { 00:26:12.801 "blocks": 24576, 00:26:12.801 "percent": 38 00:26:12.801 } 00:26:12.801 }, 00:26:12.801 "base_bdevs_list": [ 00:26:12.801 { 00:26:12.801 "name": "spare", 00:26:12.801 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:12.801 "is_configured": true, 00:26:12.801 "data_offset": 2048, 00:26:12.801 "data_size": 63488 00:26:12.801 }, 00:26:12.801 { 00:26:12.801 "name": null, 00:26:12.801 
"uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.801 "is_configured": false, 00:26:12.801 "data_offset": 2048, 00:26:12.801 "data_size": 63488 00:26:12.801 }, 00:26:12.801 { 00:26:12.801 "name": "BaseBdev3", 00:26:12.801 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:12.801 "is_configured": true, 00:26:12.801 "data_offset": 2048, 00:26:12.801 "data_size": 63488 00:26:12.801 }, 00:26:12.801 { 00:26:12.801 "name": "BaseBdev4", 00:26:12.801 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:12.801 "is_configured": true, 00:26:12.801 "data_offset": 2048, 00:26:12.801 "data_size": 63488 00:26:12.801 } 00:26:12.801 ] 00:26:12.801 }' 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=1017 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.801 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.060 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:13.060 "name": "raid_bdev1", 00:26:13.060 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:13.060 "strip_size_kb": 0, 00:26:13.060 "state": "online", 00:26:13.060 "raid_level": "raid1", 00:26:13.060 "superblock": true, 00:26:13.060 "num_base_bdevs": 4, 00:26:13.060 "num_base_bdevs_discovered": 3, 00:26:13.060 "num_base_bdevs_operational": 3, 00:26:13.060 "process": { 00:26:13.060 "type": "rebuild", 00:26:13.060 "target": "spare", 00:26:13.060 "progress": { 00:26:13.060 "blocks": 30720, 00:26:13.060 "percent": 48 00:26:13.060 } 00:26:13.060 }, 00:26:13.060 "base_bdevs_list": [ 00:26:13.060 { 00:26:13.060 "name": "spare", 00:26:13.060 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:13.060 "is_configured": true, 00:26:13.060 "data_offset": 2048, 00:26:13.060 "data_size": 63488 00:26:13.060 }, 00:26:13.060 { 00:26:13.060 "name": null, 00:26:13.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.060 "is_configured": false, 00:26:13.060 "data_offset": 2048, 00:26:13.060 "data_size": 63488 00:26:13.060 }, 00:26:13.060 { 00:26:13.060 "name": "BaseBdev3", 00:26:13.060 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:13.060 "is_configured": true, 00:26:13.060 "data_offset": 2048, 00:26:13.060 "data_size": 63488 00:26:13.060 }, 00:26:13.060 { 00:26:13.060 "name": "BaseBdev4", 00:26:13.060 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:13.060 "is_configured": true, 00:26:13.060 "data_offset": 2048, 00:26:13.060 "data_size": 63488 00:26:13.060 } 00:26:13.060 ] 00:26:13.060 }' 00:26:13.060 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:26:13.060 [2024-07-15 18:40:58.591851] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:26:13.318 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:13.318 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:13.318 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:13.318 18:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:13.886 [2024-07-15 18:40:59.320361] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:26:14.144 [2024-07-15 18:40:59.450719] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.144 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.402 [2024-07-15 18:40:59.772305] 
bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:26:14.402 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:14.402 "name": "raid_bdev1", 00:26:14.402 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:14.402 "strip_size_kb": 0, 00:26:14.402 "state": "online", 00:26:14.402 "raid_level": "raid1", 00:26:14.402 "superblock": true, 00:26:14.402 "num_base_bdevs": 4, 00:26:14.402 "num_base_bdevs_discovered": 3, 00:26:14.402 "num_base_bdevs_operational": 3, 00:26:14.402 "process": { 00:26:14.402 "type": "rebuild", 00:26:14.402 "target": "spare", 00:26:14.402 "progress": { 00:26:14.402 "blocks": 51200, 00:26:14.402 "percent": 80 00:26:14.402 } 00:26:14.402 }, 00:26:14.402 "base_bdevs_list": [ 00:26:14.402 { 00:26:14.402 "name": "spare", 00:26:14.402 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:14.402 "is_configured": true, 00:26:14.402 "data_offset": 2048, 00:26:14.402 "data_size": 63488 00:26:14.402 }, 00:26:14.402 { 00:26:14.402 "name": null, 00:26:14.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.402 "is_configured": false, 00:26:14.402 "data_offset": 2048, 00:26:14.402 "data_size": 63488 00:26:14.402 }, 00:26:14.402 { 00:26:14.402 "name": "BaseBdev3", 00:26:14.402 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:14.402 "is_configured": true, 00:26:14.402 "data_offset": 2048, 00:26:14.402 "data_size": 63488 00:26:14.402 }, 00:26:14.402 { 00:26:14.402 "name": "BaseBdev4", 00:26:14.402 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:14.402 "is_configured": true, 00:26:14.402 "data_offset": 2048, 00:26:14.402 "data_size": 63488 00:26:14.402 } 00:26:14.402 ] 00:26:14.402 }' 00:26:14.402 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:14.659 [2024-07-15 18:40:59.975873] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 
offset_begin: 49152 offset_end: 55296 00:26:14.659 [2024-07-15 18:40:59.976108] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:26:14.659 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:14.659 18:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:14.659 18:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:14.659 18:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:15.224 [2024-07-15 18:41:00.557108] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:15.224 [2024-07-15 18:41:00.666105] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:15.224 [2024-07-15 18:41:00.668220] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:15.791 "name": "raid_bdev1", 00:26:15.791 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:15.791 "strip_size_kb": 0, 00:26:15.791 "state": "online", 00:26:15.791 "raid_level": "raid1", 00:26:15.791 "superblock": true, 00:26:15.791 "num_base_bdevs": 4, 00:26:15.791 "num_base_bdevs_discovered": 3, 00:26:15.791 "num_base_bdevs_operational": 3, 00:26:15.791 "base_bdevs_list": [ 00:26:15.791 { 00:26:15.791 "name": "spare", 00:26:15.791 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:15.791 "is_configured": true, 00:26:15.791 "data_offset": 2048, 00:26:15.791 "data_size": 63488 00:26:15.791 }, 00:26:15.791 { 00:26:15.791 "name": null, 00:26:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.791 "is_configured": false, 00:26:15.791 "data_offset": 2048, 00:26:15.791 "data_size": 63488 00:26:15.791 }, 00:26:15.791 { 00:26:15.791 "name": "BaseBdev3", 00:26:15.791 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:15.791 "is_configured": true, 00:26:15.791 "data_offset": 2048, 00:26:15.791 "data_size": 63488 00:26:15.791 }, 00:26:15.791 { 00:26:15.791 "name": "BaseBdev4", 00:26:15.791 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:15.791 "is_configured": true, 00:26:15.791 "data_offset": 2048, 00:26:15.791 "data_size": 63488 00:26:15.791 } 00:26:15.791 ] 00:26:15.791 }' 00:26:15.791 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # break 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.050 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:16.309 "name": "raid_bdev1", 00:26:16.309 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:16.309 "strip_size_kb": 0, 00:26:16.309 "state": "online", 00:26:16.309 "raid_level": "raid1", 00:26:16.309 "superblock": true, 00:26:16.309 "num_base_bdevs": 4, 00:26:16.309 "num_base_bdevs_discovered": 3, 00:26:16.309 "num_base_bdevs_operational": 3, 00:26:16.309 "base_bdevs_list": [ 00:26:16.309 { 00:26:16.309 "name": "spare", 00:26:16.309 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:16.309 "is_configured": true, 00:26:16.309 "data_offset": 2048, 00:26:16.309 "data_size": 63488 00:26:16.309 }, 00:26:16.309 { 00:26:16.309 "name": null, 00:26:16.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.309 "is_configured": false, 00:26:16.309 "data_offset": 2048, 00:26:16.309 "data_size": 63488 00:26:16.309 }, 00:26:16.309 { 00:26:16.309 "name": "BaseBdev3", 00:26:16.309 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:16.309 
"is_configured": true, 00:26:16.309 "data_offset": 2048, 00:26:16.309 "data_size": 63488 00:26:16.309 }, 00:26:16.309 { 00:26:16.309 "name": "BaseBdev4", 00:26:16.309 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:16.309 "is_configured": true, 00:26:16.309 "data_offset": 2048, 00:26:16.309 "data_size": 63488 00:26:16.309 } 00:26:16.309 ] 00:26:16.309 }' 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.309 18:41:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.309 18:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.568 18:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.568 "name": "raid_bdev1", 00:26:16.568 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:16.568 "strip_size_kb": 0, 00:26:16.568 "state": "online", 00:26:16.568 "raid_level": "raid1", 00:26:16.568 "superblock": true, 00:26:16.568 "num_base_bdevs": 4, 00:26:16.568 "num_base_bdevs_discovered": 3, 00:26:16.568 "num_base_bdevs_operational": 3, 00:26:16.568 "base_bdevs_list": [ 00:26:16.568 { 00:26:16.568 "name": "spare", 00:26:16.568 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:16.568 "is_configured": true, 00:26:16.568 "data_offset": 2048, 00:26:16.568 "data_size": 63488 00:26:16.568 }, 00:26:16.568 { 00:26:16.568 "name": null, 00:26:16.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.568 "is_configured": false, 00:26:16.568 "data_offset": 2048, 00:26:16.568 "data_size": 63488 00:26:16.568 }, 00:26:16.568 { 00:26:16.568 "name": "BaseBdev3", 00:26:16.568 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:16.568 "is_configured": true, 00:26:16.568 "data_offset": 2048, 00:26:16.568 "data_size": 63488 00:26:16.568 }, 00:26:16.568 { 00:26:16.568 "name": "BaseBdev4", 00:26:16.568 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:16.568 "is_configured": true, 00:26:16.568 "data_offset": 2048, 00:26:16.568 "data_size": 63488 00:26:16.568 } 00:26:16.568 ] 00:26:16.568 }' 00:26:16.568 18:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.568 18:41:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:17.135 18:41:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:17.393 [2024-07-15 18:41:02.867732] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.393 [2024-07-15 18:41:02.867764] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.393 00:26:17.393 Latency(us) 00:26:17.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.393 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:17.393 raid_bdev1 : 12.01 90.12 270.36 0.00 0.00 14386.29 302.32 123831.83 00:26:17.393 =================================================================================================================== 00:26:17.393 Total : 90.12 270.36 0.00 0.00 14386.29 302.32 123831.83 00:26:17.393 [2024-07-15 18:41:02.891939] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.393 [2024-07-15 18:41:02.891982] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.393 [2024-07-15 18:41:02.892078] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.393 [2024-07-15 18:41:02.892087] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1892990 name raid_bdev1, state offline 00:26:17.393 0 00:26:17.393 18:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.393 18:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:17.651 18:41:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:17.651 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:26:17.909 /dev/nbd0 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:17.909 1+0 records in 00:26:17.909 1+0 records out 00:26:17.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222962 s, 18.4 MB/s 00:26:17.909 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in 
"${base_bdevs[@]:1}" 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.167 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:18.167 /dev/nbd1 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:18.425 
18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:18.425 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:18.426 1+0 records in 00:26:18.426 1+0 records out 00:26:18.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224469 s, 18.2 MB/s 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:18.426 18:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:18.683 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:18.683 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:18.683 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:18.683 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:18.683 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:18.683 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:18.684 18:41:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.684 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:26:18.941 /dev/nbd1 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 
)) 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:18.941 1+0 records in 00:26:18.941 1+0 records out 00:26:18.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233788 s, 17.5 MB/s 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:26:18.941 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:19.199 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:19.200 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:19.458 18:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:19.715 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:19.973 [2024-07-15 18:41:05.482334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:19.973 [2024-07-15 18:41:05.482376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.973 [2024-07-15 18:41:05.482399] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1890e20 00:26:19.973 [2024-07-15 18:41:05.482409] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.973 [2024-07-15 18:41:05.484185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.973 [2024-07-15 18:41:05.484211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:19.973 
[2024-07-15 18:41:05.484287] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:19.973 [2024-07-15 18:41:05.484311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:19.973 [2024-07-15 18:41:05.484413] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:19.973 [2024-07-15 18:41:05.484491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:19.973 spare 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.973 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.230 [2024-07-15 
18:41:05.584812] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x188f370 00:26:20.230 [2024-07-15 18:41:05.584829] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:20.230 [2024-07-15 18:41:05.585034] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x188f940 00:26:20.230 [2024-07-15 18:41:05.585188] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x188f370 00:26:20.230 [2024-07-15 18:41:05.585196] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x188f370 00:26:20.230 [2024-07-15 18:41:05.585304] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.230 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.230 "name": "raid_bdev1", 00:26:20.230 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:20.230 "strip_size_kb": 0, 00:26:20.230 "state": "online", 00:26:20.230 "raid_level": "raid1", 00:26:20.230 "superblock": true, 00:26:20.230 "num_base_bdevs": 4, 00:26:20.230 "num_base_bdevs_discovered": 3, 00:26:20.230 "num_base_bdevs_operational": 3, 00:26:20.230 "base_bdevs_list": [ 00:26:20.230 { 00:26:20.230 "name": "spare", 00:26:20.230 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:20.230 "is_configured": true, 00:26:20.230 "data_offset": 2048, 00:26:20.230 "data_size": 63488 00:26:20.230 }, 00:26:20.230 { 00:26:20.230 "name": null, 00:26:20.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.230 "is_configured": false, 00:26:20.230 "data_offset": 2048, 00:26:20.230 "data_size": 63488 00:26:20.230 }, 00:26:20.230 { 00:26:20.230 "name": "BaseBdev3", 00:26:20.230 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:20.230 "is_configured": true, 00:26:20.230 "data_offset": 2048, 00:26:20.230 "data_size": 63488 00:26:20.230 }, 00:26:20.230 { 00:26:20.230 "name": "BaseBdev4", 00:26:20.230 "uuid": 
"47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:20.230 "is_configured": true, 00:26:20.230 "data_offset": 2048, 00:26:20.230 "data_size": 63488 00:26:20.230 } 00:26:20.230 ] 00:26:20.231 }' 00:26:20.231 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.231 18:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.177 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:21.177 "name": "raid_bdev1", 00:26:21.177 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:21.177 "strip_size_kb": 0, 00:26:21.177 "state": "online", 00:26:21.177 "raid_level": "raid1", 00:26:21.177 "superblock": true, 00:26:21.177 "num_base_bdevs": 4, 00:26:21.177 "num_base_bdevs_discovered": 3, 00:26:21.177 "num_base_bdevs_operational": 3, 00:26:21.177 "base_bdevs_list": [ 00:26:21.177 { 00:26:21.177 "name": "spare", 00:26:21.177 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:21.177 "is_configured": true, 00:26:21.177 "data_offset": 2048, 00:26:21.177 "data_size": 63488 
00:26:21.177 }, 00:26:21.177 { 00:26:21.177 "name": null, 00:26:21.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.177 "is_configured": false, 00:26:21.177 "data_offset": 2048, 00:26:21.177 "data_size": 63488 00:26:21.177 }, 00:26:21.177 { 00:26:21.177 "name": "BaseBdev3", 00:26:21.177 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:21.177 "is_configured": true, 00:26:21.177 "data_offset": 2048, 00:26:21.177 "data_size": 63488 00:26:21.177 }, 00:26:21.177 { 00:26:21.177 "name": "BaseBdev4", 00:26:21.177 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:21.177 "is_configured": true, 00:26:21.177 "data_offset": 2048, 00:26:21.177 "data_size": 63488 00:26:21.177 } 00:26:21.177 ] 00:26:21.177 }' 00:26:21.178 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:21.178 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:21.178 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:21.435 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:21.435 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.435 18:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:21.693 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:21.693 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:21.952 [2024-07-15 18:41:07.476321] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.210 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.469 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.469 "name": "raid_bdev1", 00:26:22.469 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:22.469 "strip_size_kb": 0, 00:26:22.469 "state": "online", 00:26:22.469 "raid_level": "raid1", 00:26:22.469 "superblock": true, 00:26:22.469 "num_base_bdevs": 4, 00:26:22.469 "num_base_bdevs_discovered": 2, 00:26:22.469 "num_base_bdevs_operational": 2, 00:26:22.469 "base_bdevs_list": [ 00:26:22.469 { 00:26:22.469 "name": null, 00:26:22.469 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:22.469 "is_configured": false, 00:26:22.469 "data_offset": 2048, 00:26:22.469 "data_size": 63488 00:26:22.469 }, 00:26:22.469 { 00:26:22.469 "name": null, 00:26:22.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.469 "is_configured": false, 00:26:22.469 "data_offset": 2048, 00:26:22.469 "data_size": 63488 00:26:22.469 }, 00:26:22.469 { 00:26:22.469 "name": "BaseBdev3", 00:26:22.469 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:22.469 "is_configured": true, 00:26:22.469 "data_offset": 2048, 00:26:22.469 "data_size": 63488 00:26:22.469 }, 00:26:22.469 { 00:26:22.469 "name": "BaseBdev4", 00:26:22.469 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:22.469 "is_configured": true, 00:26:22.469 "data_offset": 2048, 00:26:22.469 "data_size": 63488 00:26:22.469 } 00:26:22.469 ] 00:26:22.469 }' 00:26:22.469 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.469 18:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:23.035 18:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:23.294 [2024-07-15 18:41:08.631617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:23.294 [2024-07-15 18:41:08.631766] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:23.294 [2024-07-15 18:41:08.631780] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:23.294 [2024-07-15 18:41:08.631804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:23.294 [2024-07-15 18:41:08.636118] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x188f650 00:26:23.294 [2024-07-15 18:41:08.638118] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:23.294 18:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.270 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.529 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:24.529 "name": "raid_bdev1", 00:26:24.529 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:24.529 "strip_size_kb": 0, 00:26:24.529 "state": "online", 00:26:24.529 "raid_level": "raid1", 00:26:24.529 "superblock": true, 00:26:24.529 "num_base_bdevs": 4, 00:26:24.529 "num_base_bdevs_discovered": 3, 00:26:24.529 "num_base_bdevs_operational": 3, 00:26:24.529 "process": { 00:26:24.529 "type": "rebuild", 00:26:24.529 "target": "spare", 00:26:24.529 "progress": { 00:26:24.529 "blocks": 24576, 
00:26:24.529 "percent": 38 00:26:24.529 } 00:26:24.529 }, 00:26:24.529 "base_bdevs_list": [ 00:26:24.529 { 00:26:24.529 "name": "spare", 00:26:24.529 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:24.529 "is_configured": true, 00:26:24.529 "data_offset": 2048, 00:26:24.529 "data_size": 63488 00:26:24.529 }, 00:26:24.529 { 00:26:24.529 "name": null, 00:26:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.529 "is_configured": false, 00:26:24.529 "data_offset": 2048, 00:26:24.529 "data_size": 63488 00:26:24.529 }, 00:26:24.529 { 00:26:24.529 "name": "BaseBdev3", 00:26:24.529 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:24.529 "is_configured": true, 00:26:24.529 "data_offset": 2048, 00:26:24.529 "data_size": 63488 00:26:24.529 }, 00:26:24.529 { 00:26:24.529 "name": "BaseBdev4", 00:26:24.529 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:24.529 "is_configured": true, 00:26:24.529 "data_offset": 2048, 00:26:24.529 "data_size": 63488 00:26:24.529 } 00:26:24.529 ] 00:26:24.529 }' 00:26:24.529 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:24.529 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:24.529 18:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:24.529 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:24.529 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:24.788 [2024-07-15 18:41:10.254798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:25.047 [2024-07-15 18:41:10.351037] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:25.047 [2024-07-15 18:41:10.351083] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.047 [2024-07-15 18:41:10.351098] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:25.047 [2024-07-15 18:41:10.351104] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.047 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.306 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.306 "name": "raid_bdev1", 00:26:25.306 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 
00:26:25.306 "strip_size_kb": 0, 00:26:25.306 "state": "online", 00:26:25.306 "raid_level": "raid1", 00:26:25.306 "superblock": true, 00:26:25.306 "num_base_bdevs": 4, 00:26:25.306 "num_base_bdevs_discovered": 2, 00:26:25.306 "num_base_bdevs_operational": 2, 00:26:25.306 "base_bdevs_list": [ 00:26:25.306 { 00:26:25.306 "name": null, 00:26:25.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.306 "is_configured": false, 00:26:25.306 "data_offset": 2048, 00:26:25.306 "data_size": 63488 00:26:25.306 }, 00:26:25.306 { 00:26:25.306 "name": null, 00:26:25.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.306 "is_configured": false, 00:26:25.306 "data_offset": 2048, 00:26:25.306 "data_size": 63488 00:26:25.306 }, 00:26:25.306 { 00:26:25.306 "name": "BaseBdev3", 00:26:25.306 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:25.306 "is_configured": true, 00:26:25.306 "data_offset": 2048, 00:26:25.306 "data_size": 63488 00:26:25.306 }, 00:26:25.306 { 00:26:25.306 "name": "BaseBdev4", 00:26:25.306 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:25.306 "is_configured": true, 00:26:25.306 "data_offset": 2048, 00:26:25.306 "data_size": 63488 00:26:25.306 } 00:26:25.306 ] 00:26:25.306 }' 00:26:25.306 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.306 18:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:25.873 18:41:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:26.131 [2024-07-15 18:41:11.502501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:26.131 [2024-07-15 18:41:11.502544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.131 [2024-07-15 18:41:11.502563] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x188f010 00:26:26.131 [2024-07-15 18:41:11.502572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.131 [2024-07-15 18:41:11.502937] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.131 [2024-07-15 18:41:11.502964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:26.131 [2024-07-15 18:41:11.503043] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:26.131 [2024-07-15 18:41:11.503054] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:26.131 [2024-07-15 18:41:11.503061] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:26.131 [2024-07-15 18:41:11.503076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:26.131 [2024-07-15 18:41:11.507342] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1892930 00:26:26.131 spare 00:26:26.131 [2024-07-15 18:41:11.508880] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:26.131 18:41:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.068 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.327 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.327 "name": "raid_bdev1", 00:26:27.327 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:27.327 "strip_size_kb": 0, 00:26:27.327 "state": "online", 00:26:27.327 "raid_level": "raid1", 00:26:27.327 "superblock": true, 00:26:27.327 "num_base_bdevs": 4, 00:26:27.327 "num_base_bdevs_discovered": 3, 00:26:27.327 "num_base_bdevs_operational": 3, 00:26:27.327 "process": { 00:26:27.327 "type": "rebuild", 00:26:27.327 "target": "spare", 00:26:27.327 "progress": { 00:26:27.327 "blocks": 24576, 00:26:27.327 "percent": 38 00:26:27.327 } 00:26:27.327 }, 00:26:27.327 "base_bdevs_list": [ 00:26:27.327 { 00:26:27.327 "name": "spare", 00:26:27.327 "uuid": "6c05ac77-c9f4-5701-a2ed-576a494bb089", 00:26:27.327 "is_configured": true, 00:26:27.327 "data_offset": 2048, 00:26:27.327 "data_size": 63488 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "name": null, 00:26:27.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.327 "is_configured": false, 00:26:27.327 "data_offset": 2048, 00:26:27.327 "data_size": 63488 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "name": "BaseBdev3", 00:26:27.327 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:27.327 "is_configured": true, 00:26:27.327 "data_offset": 2048, 00:26:27.327 "data_size": 63488 00:26:27.327 }, 00:26:27.327 { 00:26:27.327 "name": "BaseBdev4", 00:26:27.327 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:27.327 "is_configured": true, 00:26:27.327 "data_offset": 2048, 00:26:27.327 "data_size": 63488 00:26:27.327 } 00:26:27.327 ] 00:26:27.327 }' 00:26:27.327 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:26:27.327 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.327 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.586 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.586 18:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:27.586 [2024-07-15 18:41:13.121560] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:27.845 [2024-07-15 18:41:13.221867] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:27.845 [2024-07-15 18:41:13.221914] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.845 [2024-07-15 18:41:13.221930] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:27.845 [2024-07-15 18:41:13.221936] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.845 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.104 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.104 "name": "raid_bdev1", 00:26:28.104 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:28.104 "strip_size_kb": 0, 00:26:28.104 "state": "online", 00:26:28.104 "raid_level": "raid1", 00:26:28.104 "superblock": true, 00:26:28.104 "num_base_bdevs": 4, 00:26:28.104 "num_base_bdevs_discovered": 2, 00:26:28.104 "num_base_bdevs_operational": 2, 00:26:28.104 "base_bdevs_list": [ 00:26:28.104 { 00:26:28.104 "name": null, 00:26:28.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.104 "is_configured": false, 00:26:28.104 "data_offset": 2048, 00:26:28.104 "data_size": 63488 00:26:28.104 }, 00:26:28.104 { 00:26:28.104 "name": null, 00:26:28.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.104 "is_configured": false, 00:26:28.104 "data_offset": 2048, 00:26:28.104 "data_size": 63488 00:26:28.104 }, 00:26:28.104 { 00:26:28.104 "name": "BaseBdev3", 00:26:28.104 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:28.104 "is_configured": true, 00:26:28.104 "data_offset": 2048, 00:26:28.104 "data_size": 63488 00:26:28.104 }, 00:26:28.104 { 00:26:28.104 "name": "BaseBdev4", 00:26:28.104 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:28.104 "is_configured": true, 00:26:28.104 "data_offset": 2048, 
00:26:28.104 "data_size": 63488 00:26:28.104 } 00:26:28.104 ] 00:26:28.104 }' 00:26:28.104 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.104 18:41:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.671 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.930 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:28.930 "name": "raid_bdev1", 00:26:28.930 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:28.930 "strip_size_kb": 0, 00:26:28.930 "state": "online", 00:26:28.930 "raid_level": "raid1", 00:26:28.930 "superblock": true, 00:26:28.930 "num_base_bdevs": 4, 00:26:28.930 "num_base_bdevs_discovered": 2, 00:26:28.930 "num_base_bdevs_operational": 2, 00:26:28.930 "base_bdevs_list": [ 00:26:28.930 { 00:26:28.930 "name": null, 00:26:28.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.930 "is_configured": false, 00:26:28.930 "data_offset": 2048, 00:26:28.930 "data_size": 63488 00:26:28.930 }, 00:26:28.930 { 00:26:28.930 "name": null, 00:26:28.930 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:28.930 "is_configured": false, 00:26:28.930 "data_offset": 2048, 00:26:28.930 "data_size": 63488 00:26:28.930 }, 00:26:28.930 { 00:26:28.930 "name": "BaseBdev3", 00:26:28.930 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:28.930 "is_configured": true, 00:26:28.930 "data_offset": 2048, 00:26:28.930 "data_size": 63488 00:26:28.930 }, 00:26:28.930 { 00:26:28.930 "name": "BaseBdev4", 00:26:28.930 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:28.930 "is_configured": true, 00:26:28.930 "data_offset": 2048, 00:26:28.930 "data_size": 63488 00:26:28.930 } 00:26:28.930 ] 00:26:28.930 }' 00:26:28.930 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:28.930 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:28.930 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.189 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:29.189 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:29.448 18:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:29.707 [2024-07-15 18:41:15.223914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:29.707 [2024-07-15 18:41:15.223965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.707 [2024-07-15 18:41:15.223982] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1898360 00:26:29.707 [2024-07-15 18:41:15.223991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.707 
[2024-07-15 18:41:15.224330] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.707 [2024-07-15 18:41:15.224345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:29.707 [2024-07-15 18:41:15.224405] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:29.707 [2024-07-15 18:41:15.224415] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:29.707 [2024-07-15 18:41:15.224422] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:29.707 BaseBdev1 00:26:29.707 18:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.084 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.085 18:41:16 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.085 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.085 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.085 "name": "raid_bdev1", 00:26:31.085 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:31.085 "strip_size_kb": 0, 00:26:31.085 "state": "online", 00:26:31.085 "raid_level": "raid1", 00:26:31.085 "superblock": true, 00:26:31.085 "num_base_bdevs": 4, 00:26:31.085 "num_base_bdevs_discovered": 2, 00:26:31.085 "num_base_bdevs_operational": 2, 00:26:31.085 "base_bdevs_list": [ 00:26:31.085 { 00:26:31.085 "name": null, 00:26:31.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.085 "is_configured": false, 00:26:31.085 "data_offset": 2048, 00:26:31.085 "data_size": 63488 00:26:31.085 }, 00:26:31.085 { 00:26:31.085 "name": null, 00:26:31.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.085 "is_configured": false, 00:26:31.085 "data_offset": 2048, 00:26:31.085 "data_size": 63488 00:26:31.085 }, 00:26:31.085 { 00:26:31.085 "name": "BaseBdev3", 00:26:31.085 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:31.085 "is_configured": true, 00:26:31.085 "data_offset": 2048, 00:26:31.085 "data_size": 63488 00:26:31.085 }, 00:26:31.085 { 00:26:31.085 "name": "BaseBdev4", 00:26:31.085 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:31.085 "is_configured": true, 00:26:31.085 "data_offset": 2048, 00:26:31.085 "data_size": 63488 00:26:31.085 } 00:26:31.085 ] 00:26:31.085 }' 00:26:31.085 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.085 18:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:31.652 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:26:31.652 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:31.652 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:31.653 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:31.653 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:31.653 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.653 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.912 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:31.912 "name": "raid_bdev1", 00:26:31.912 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:31.912 "strip_size_kb": 0, 00:26:31.912 "state": "online", 00:26:31.912 "raid_level": "raid1", 00:26:31.912 "superblock": true, 00:26:31.912 "num_base_bdevs": 4, 00:26:31.912 "num_base_bdevs_discovered": 2, 00:26:31.912 "num_base_bdevs_operational": 2, 00:26:31.912 "base_bdevs_list": [ 00:26:31.912 { 00:26:31.912 "name": null, 00:26:31.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.912 "is_configured": false, 00:26:31.912 "data_offset": 2048, 00:26:31.912 "data_size": 63488 00:26:31.912 }, 00:26:31.912 { 00:26:31.912 "name": null, 00:26:31.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.912 "is_configured": false, 00:26:31.912 "data_offset": 2048, 00:26:31.912 "data_size": 63488 00:26:31.912 }, 00:26:31.912 { 00:26:31.912 "name": "BaseBdev3", 00:26:31.912 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:31.912 "is_configured": true, 00:26:31.912 "data_offset": 2048, 00:26:31.912 "data_size": 63488 00:26:31.912 }, 00:26:31.912 { 
00:26:31.912 "name": "BaseBdev4", 00:26:31.912 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:31.912 "is_configured": true, 00:26:31.912 "data_offset": 2048, 00:26:31.912 "data_size": 63488 00:26:31.912 } 00:26:31.912 ] 00:26:31.912 }' 00:26:31.912 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:32.171 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:32.429 [2024-07-15 18:41:17.755273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:32.429 [2024-07-15 18:41:17.755388] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:32.429 [2024-07-15 18:41:17.755401] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:32.429 request: 00:26:32.429 { 00:26:32.429 "base_bdev": "BaseBdev1", 00:26:32.429 "raid_bdev": "raid_bdev1", 00:26:32.429 "method": "bdev_raid_add_base_bdev", 00:26:32.429 "req_id": 1 00:26:32.429 } 00:26:32.429 Got JSON-RPC error response 00:26:32.429 response: 00:26:32.429 { 00:26:32.429 "code": -22, 00:26:32.429 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:32.429 } 00:26:32.429 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:26:32.429 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:32.429 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:32.429 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:32.429 18:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.365 18:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.624 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.624 "name": "raid_bdev1", 00:26:33.624 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:33.624 "strip_size_kb": 0, 00:26:33.624 "state": "online", 00:26:33.624 "raid_level": "raid1", 00:26:33.624 "superblock": true, 00:26:33.624 "num_base_bdevs": 4, 00:26:33.624 
"num_base_bdevs_discovered": 2, 00:26:33.624 "num_base_bdevs_operational": 2, 00:26:33.624 "base_bdevs_list": [ 00:26:33.624 { 00:26:33.624 "name": null, 00:26:33.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.624 "is_configured": false, 00:26:33.624 "data_offset": 2048, 00:26:33.624 "data_size": 63488 00:26:33.624 }, 00:26:33.624 { 00:26:33.624 "name": null, 00:26:33.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.624 "is_configured": false, 00:26:33.624 "data_offset": 2048, 00:26:33.624 "data_size": 63488 00:26:33.624 }, 00:26:33.624 { 00:26:33.624 "name": "BaseBdev3", 00:26:33.625 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:33.625 "is_configured": true, 00:26:33.625 "data_offset": 2048, 00:26:33.625 "data_size": 63488 00:26:33.625 }, 00:26:33.625 { 00:26:33.625 "name": "BaseBdev4", 00:26:33.625 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:33.625 "is_configured": true, 00:26:33.625 "data_offset": 2048, 00:26:33.625 "data_size": 63488 00:26:33.625 } 00:26:33.625 ] 00:26:33.625 }' 00:26:33.625 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.625 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:34.192 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.451 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.451 "name": "raid_bdev1", 00:26:34.451 "uuid": "41d8e06f-e784-49a3-bf07-a07ff9491823", 00:26:34.451 "strip_size_kb": 0, 00:26:34.451 "state": "online", 00:26:34.451 "raid_level": "raid1", 00:26:34.451 "superblock": true, 00:26:34.451 "num_base_bdevs": 4, 00:26:34.451 "num_base_bdevs_discovered": 2, 00:26:34.451 "num_base_bdevs_operational": 2, 00:26:34.451 "base_bdevs_list": [ 00:26:34.451 { 00:26:34.451 "name": null, 00:26:34.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.451 "is_configured": false, 00:26:34.451 "data_offset": 2048, 00:26:34.451 "data_size": 63488 00:26:34.451 }, 00:26:34.451 { 00:26:34.451 "name": null, 00:26:34.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.451 "is_configured": false, 00:26:34.451 "data_offset": 2048, 00:26:34.451 "data_size": 63488 00:26:34.451 }, 00:26:34.451 { 00:26:34.451 "name": "BaseBdev3", 00:26:34.451 "uuid": "07ed41dc-1ba3-5bc5-ab55-3e60dc03af62", 00:26:34.451 "is_configured": true, 00:26:34.451 "data_offset": 2048, 00:26:34.451 "data_size": 63488 00:26:34.451 }, 00:26:34.451 { 00:26:34.451 "name": "BaseBdev4", 00:26:34.451 "uuid": "47d7cb99-13e7-512b-9dd5-928cd14f0ecd", 00:26:34.451 "is_configured": true, 00:26:34.451 "data_offset": 2048, 00:26:34.451 "data_size": 63488 00:26:34.451 } 00:26:34.451 ] 00:26:34.451 }' 00:26:34.451 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.451 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:34.451 18:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2922121 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2922121 ']' 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2922121 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2922121 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2922121' 00:26:34.711 killing process with pid 2922121 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2922121 00:26:34.711 Received shutdown signal, test time was about 29.137225 seconds 00:26:34.711 00:26:34.711 Latency(us) 00:26:34.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.711 =================================================================================================================== 00:26:34.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.711 [2024-07-15 18:41:20.061059] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:34.711 [2024-07-15 18:41:20.061161] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.711 [2024-07-15 18:41:20.061217] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:26:34.711 [2024-07-15 18:41:20.061228] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x188f370 name raid_bdev1, state offline 00:26:34.711 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2922121 00:26:34.711 [2024-07-15 18:41:20.102013] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:34.971 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:26:34.971 00:26:34.971 real 0m35.079s 00:26:34.971 user 0m57.248s 00:26:34.971 sys 0m4.290s 00:26:34.971 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.971 18:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:34.971 ************************************ 00:26:34.971 END TEST raid_rebuild_test_sb_io 00:26:34.971 ************************************ 00:26:34.971 18:41:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:34.971 18:41:20 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:26:34.971 18:41:20 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:26:34.971 18:41:20 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:26:34.971 18:41:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:34.971 18:41:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.971 18:41:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:34.971 ************************************ 00:26:34.971 START TEST raid_state_function_test_sb_4k 00:26:34.971 ************************************ 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:34.971 18:41:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2927909 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2927909' 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:34.971 Process raid pid: 2927909 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2927909 /var/tmp/spdk-raid.sock 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2927909 ']' 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:34.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.971 18:41:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:34.971 [2024-07-15 18:41:20.421883] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:34.971 [2024-07-15 18:41:20.421942] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.971 [2024-07-15 18:41:20.520882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.230 [2024-07-15 18:41:20.614331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.230 [2024-07-15 18:41:20.671711] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:35.230 [2024-07-15 18:41:20.671739] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:36.165 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.165 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:26:36.165 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:36.165 [2024-07-15 18:41:21.610300] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:36.165 [2024-07-15 18:41:21.610340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:36.165 [2024-07-15 18:41:21.610349] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:36.165 [2024-07-15 18:41:21.610362] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:36.165 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:36.165 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.166 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.423 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.423 "name": "Existed_Raid", 00:26:36.423 "uuid": "5e1b4029-e69c-447c-8103-066621cf2169", 00:26:36.423 "strip_size_kb": 0, 00:26:36.423 "state": "configuring", 00:26:36.423 "raid_level": "raid1", 00:26:36.423 "superblock": true, 00:26:36.423 
"num_base_bdevs": 2, 00:26:36.423 "num_base_bdevs_discovered": 0, 00:26:36.423 "num_base_bdevs_operational": 2, 00:26:36.423 "base_bdevs_list": [ 00:26:36.423 { 00:26:36.423 "name": "BaseBdev1", 00:26:36.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.423 "is_configured": false, 00:26:36.423 "data_offset": 0, 00:26:36.423 "data_size": 0 00:26:36.423 }, 00:26:36.423 { 00:26:36.423 "name": "BaseBdev2", 00:26:36.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.423 "is_configured": false, 00:26:36.423 "data_offset": 0, 00:26:36.423 "data_size": 0 00:26:36.423 } 00:26:36.423 ] 00:26:36.423 }' 00:26:36.423 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.423 18:41:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:36.990 18:41:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:37.248 [2024-07-15 18:41:22.761234] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:37.248 [2024-07-15 18:41:22.761264] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2250b80 name Existed_Raid, state configuring 00:26:37.248 18:41:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:37.506 [2024-07-15 18:41:23.013919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:37.506 [2024-07-15 18:41:23.013943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:37.506 [2024-07-15 18:41:23.013955] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:37.506 [2024-07-15 
18:41:23.013963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:37.507 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:26:37.765 [2024-07-15 18:41:23.284010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:37.765 BaseBdev1 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:37.765 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:38.022 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:38.279 [ 00:26:38.279 { 00:26:38.279 "name": "BaseBdev1", 00:26:38.279 "aliases": [ 00:26:38.279 "4a6b8f84-8c9a-460b-854a-499d72ccf992" 00:26:38.279 ], 00:26:38.279 "product_name": "Malloc disk", 00:26:38.279 "block_size": 4096, 00:26:38.279 "num_blocks": 8192, 00:26:38.279 "uuid": "4a6b8f84-8c9a-460b-854a-499d72ccf992", 00:26:38.279 "assigned_rate_limits": { 
00:26:38.279 "rw_ios_per_sec": 0, 00:26:38.279 "rw_mbytes_per_sec": 0, 00:26:38.279 "r_mbytes_per_sec": 0, 00:26:38.279 "w_mbytes_per_sec": 0 00:26:38.279 }, 00:26:38.279 "claimed": true, 00:26:38.279 "claim_type": "exclusive_write", 00:26:38.279 "zoned": false, 00:26:38.279 "supported_io_types": { 00:26:38.279 "read": true, 00:26:38.279 "write": true, 00:26:38.279 "unmap": true, 00:26:38.279 "flush": true, 00:26:38.279 "reset": true, 00:26:38.279 "nvme_admin": false, 00:26:38.279 "nvme_io": false, 00:26:38.279 "nvme_io_md": false, 00:26:38.279 "write_zeroes": true, 00:26:38.279 "zcopy": true, 00:26:38.279 "get_zone_info": false, 00:26:38.279 "zone_management": false, 00:26:38.279 "zone_append": false, 00:26:38.279 "compare": false, 00:26:38.279 "compare_and_write": false, 00:26:38.279 "abort": true, 00:26:38.279 "seek_hole": false, 00:26:38.279 "seek_data": false, 00:26:38.279 "copy": true, 00:26:38.279 "nvme_iov_md": false 00:26:38.279 }, 00:26:38.279 "memory_domains": [ 00:26:38.279 { 00:26:38.279 "dma_device_id": "system", 00:26:38.279 "dma_device_type": 1 00:26:38.279 }, 00:26:38.279 { 00:26:38.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:38.279 "dma_device_type": 2 00:26:38.279 } 00:26:38.279 ], 00:26:38.279 "driver_specific": {} 00:26:38.279 } 00:26:38.279 ] 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.279 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.280 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.280 18:41:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.537 18:41:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.537 "name": "Existed_Raid", 00:26:38.537 "uuid": "97169352-c020-49d0-b498-8efb8f2ba4f6", 00:26:38.537 "strip_size_kb": 0, 00:26:38.537 "state": "configuring", 00:26:38.537 "raid_level": "raid1", 00:26:38.537 "superblock": true, 00:26:38.537 "num_base_bdevs": 2, 00:26:38.537 "num_base_bdevs_discovered": 1, 00:26:38.537 "num_base_bdevs_operational": 2, 00:26:38.537 "base_bdevs_list": [ 00:26:38.537 { 00:26:38.537 "name": "BaseBdev1", 00:26:38.537 "uuid": "4a6b8f84-8c9a-460b-854a-499d72ccf992", 00:26:38.537 "is_configured": true, 00:26:38.537 "data_offset": 256, 00:26:38.537 "data_size": 7936 00:26:38.537 }, 00:26:38.537 { 00:26:38.537 "name": "BaseBdev2", 00:26:38.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.537 "is_configured": false, 00:26:38.537 "data_offset": 0, 00:26:38.537 "data_size": 0 00:26:38.537 } 00:26:38.537 ] 00:26:38.537 }' 00:26:38.537 18:41:24 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.797 18:41:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:39.759 18:41:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:39.759 [2024-07-15 18:41:25.197313] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:39.760 [2024-07-15 18:41:25.197354] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2250470 name Existed_Raid, state configuring 00:26:39.760 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:40.018 [2024-07-15 18:41:25.458068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:40.018 [2024-07-15 18:41:25.459602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:40.018 [2024-07-15 18:41:25.459632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:40.018 18:41:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.018 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.277 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.277 "name": "Existed_Raid", 00:26:40.277 "uuid": "8d82fcf9-a4af-482a-bda3-c4feef78acaa", 00:26:40.277 "strip_size_kb": 0, 00:26:40.277 "state": "configuring", 00:26:40.277 "raid_level": "raid1", 00:26:40.277 "superblock": true, 00:26:40.277 "num_base_bdevs": 2, 00:26:40.277 "num_base_bdevs_discovered": 1, 00:26:40.277 "num_base_bdevs_operational": 2, 00:26:40.277 "base_bdevs_list": [ 00:26:40.277 { 00:26:40.277 "name": "BaseBdev1", 00:26:40.277 "uuid": "4a6b8f84-8c9a-460b-854a-499d72ccf992", 00:26:40.277 "is_configured": true, 00:26:40.277 "data_offset": 256, 00:26:40.277 "data_size": 7936 00:26:40.277 }, 00:26:40.277 { 00:26:40.277 "name": "BaseBdev2", 00:26:40.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.277 
"is_configured": false, 00:26:40.277 "data_offset": 0, 00:26:40.277 "data_size": 0 00:26:40.277 } 00:26:40.277 ] 00:26:40.277 }' 00:26:40.277 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.277 18:41:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:26:41.213 [2024-07-15 18:41:26.652424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:41.213 [2024-07-15 18:41:26.652576] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2251260 00:26:41.213 [2024-07-15 18:41:26.652590] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:41.213 [2024-07-15 18:41:26.652775] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22503c0 00:26:41.213 [2024-07-15 18:41:26.652899] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2251260 00:26:41.214 [2024-07-15 18:41:26.652909] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2251260 00:26:41.214 [2024-07-15 18:41:26.653024] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:41.214 BaseBdev2 00:26:41.214 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:41.214 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:41.214 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:41.214 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:26:41.214 18:41:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:41.214 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:41.214 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:41.472 18:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:41.731 [ 00:26:41.731 { 00:26:41.731 "name": "BaseBdev2", 00:26:41.731 "aliases": [ 00:26:41.731 "c144d301-ece2-4317-bf50-92b8f508dc85" 00:26:41.731 ], 00:26:41.731 "product_name": "Malloc disk", 00:26:41.731 "block_size": 4096, 00:26:41.731 "num_blocks": 8192, 00:26:41.731 "uuid": "c144d301-ece2-4317-bf50-92b8f508dc85", 00:26:41.731 "assigned_rate_limits": { 00:26:41.731 "rw_ios_per_sec": 0, 00:26:41.731 "rw_mbytes_per_sec": 0, 00:26:41.731 "r_mbytes_per_sec": 0, 00:26:41.731 "w_mbytes_per_sec": 0 00:26:41.731 }, 00:26:41.731 "claimed": true, 00:26:41.731 "claim_type": "exclusive_write", 00:26:41.731 "zoned": false, 00:26:41.731 "supported_io_types": { 00:26:41.731 "read": true, 00:26:41.731 "write": true, 00:26:41.731 "unmap": true, 00:26:41.731 "flush": true, 00:26:41.731 "reset": true, 00:26:41.731 "nvme_admin": false, 00:26:41.731 "nvme_io": false, 00:26:41.731 "nvme_io_md": false, 00:26:41.731 "write_zeroes": true, 00:26:41.731 "zcopy": true, 00:26:41.731 "get_zone_info": false, 00:26:41.731 "zone_management": false, 00:26:41.731 "zone_append": false, 00:26:41.731 "compare": false, 00:26:41.731 "compare_and_write": false, 00:26:41.731 "abort": true, 00:26:41.731 "seek_hole": false, 00:26:41.731 "seek_data": false, 00:26:41.731 "copy": true, 00:26:41.731 "nvme_iov_md": false 00:26:41.731 }, 00:26:41.731 
"memory_domains": [ 00:26:41.731 { 00:26:41.731 "dma_device_id": "system", 00:26:41.731 "dma_device_type": 1 00:26:41.731 }, 00:26:41.731 { 00:26:41.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.731 "dma_device_type": 2 00:26:41.731 } 00:26:41.731 ], 00:26:41.731 "driver_specific": {} 00:26:41.731 } 00:26:41.731 ] 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.731 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.990 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.990 "name": "Existed_Raid", 00:26:41.990 "uuid": "8d82fcf9-a4af-482a-bda3-c4feef78acaa", 00:26:41.990 "strip_size_kb": 0, 00:26:41.990 "state": "online", 00:26:41.990 "raid_level": "raid1", 00:26:41.990 "superblock": true, 00:26:41.990 "num_base_bdevs": 2, 00:26:41.990 "num_base_bdevs_discovered": 2, 00:26:41.990 "num_base_bdevs_operational": 2, 00:26:41.990 "base_bdevs_list": [ 00:26:41.990 { 00:26:41.990 "name": "BaseBdev1", 00:26:41.990 "uuid": "4a6b8f84-8c9a-460b-854a-499d72ccf992", 00:26:41.990 "is_configured": true, 00:26:41.990 "data_offset": 256, 00:26:41.990 "data_size": 7936 00:26:41.990 }, 00:26:41.990 { 00:26:41.990 "name": "BaseBdev2", 00:26:41.990 "uuid": "c144d301-ece2-4317-bf50-92b8f508dc85", 00:26:41.990 "is_configured": true, 00:26:41.990 "data_offset": 256, 00:26:41.990 "data_size": 7936 00:26:41.990 } 00:26:41.990 ] 00:26:41.990 }' 00:26:41.990 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.990 18:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:42.558 18:41:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:42.558 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:42.817 [2024-07-15 18:41:28.313179] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:42.817 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:42.817 "name": "Existed_Raid", 00:26:42.817 "aliases": [ 00:26:42.817 "8d82fcf9-a4af-482a-bda3-c4feef78acaa" 00:26:42.817 ], 00:26:42.817 "product_name": "Raid Volume", 00:26:42.817 "block_size": 4096, 00:26:42.817 "num_blocks": 7936, 00:26:42.817 "uuid": "8d82fcf9-a4af-482a-bda3-c4feef78acaa", 00:26:42.817 "assigned_rate_limits": { 00:26:42.817 "rw_ios_per_sec": 0, 00:26:42.817 "rw_mbytes_per_sec": 0, 00:26:42.817 "r_mbytes_per_sec": 0, 00:26:42.817 "w_mbytes_per_sec": 0 00:26:42.817 }, 00:26:42.817 "claimed": false, 00:26:42.817 "zoned": false, 00:26:42.817 "supported_io_types": { 00:26:42.817 "read": true, 00:26:42.817 "write": true, 00:26:42.817 "unmap": false, 00:26:42.817 "flush": false, 00:26:42.817 "reset": true, 00:26:42.817 "nvme_admin": false, 00:26:42.817 "nvme_io": false, 00:26:42.817 "nvme_io_md": false, 00:26:42.817 "write_zeroes": true, 00:26:42.817 "zcopy": false, 00:26:42.817 "get_zone_info": false, 00:26:42.817 "zone_management": false, 00:26:42.817 "zone_append": false, 00:26:42.817 "compare": false, 00:26:42.817 "compare_and_write": false, 00:26:42.817 "abort": false, 00:26:42.817 "seek_hole": false, 00:26:42.817 "seek_data": false, 00:26:42.817 "copy": false, 00:26:42.817 "nvme_iov_md": false 00:26:42.817 
}, 00:26:42.817 "memory_domains": [ 00:26:42.817 { 00:26:42.817 "dma_device_id": "system", 00:26:42.817 "dma_device_type": 1 00:26:42.817 }, 00:26:42.817 { 00:26:42.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.817 "dma_device_type": 2 00:26:42.817 }, 00:26:42.817 { 00:26:42.817 "dma_device_id": "system", 00:26:42.817 "dma_device_type": 1 00:26:42.817 }, 00:26:42.817 { 00:26:42.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.817 "dma_device_type": 2 00:26:42.817 } 00:26:42.817 ], 00:26:42.817 "driver_specific": { 00:26:42.817 "raid": { 00:26:42.817 "uuid": "8d82fcf9-a4af-482a-bda3-c4feef78acaa", 00:26:42.817 "strip_size_kb": 0, 00:26:42.817 "state": "online", 00:26:42.817 "raid_level": "raid1", 00:26:42.817 "superblock": true, 00:26:42.817 "num_base_bdevs": 2, 00:26:42.817 "num_base_bdevs_discovered": 2, 00:26:42.817 "num_base_bdevs_operational": 2, 00:26:42.817 "base_bdevs_list": [ 00:26:42.817 { 00:26:42.817 "name": "BaseBdev1", 00:26:42.817 "uuid": "4a6b8f84-8c9a-460b-854a-499d72ccf992", 00:26:42.817 "is_configured": true, 00:26:42.817 "data_offset": 256, 00:26:42.817 "data_size": 7936 00:26:42.817 }, 00:26:42.817 { 00:26:42.817 "name": "BaseBdev2", 00:26:42.817 "uuid": "c144d301-ece2-4317-bf50-92b8f508dc85", 00:26:42.817 "is_configured": true, 00:26:42.817 "data_offset": 256, 00:26:42.817 "data_size": 7936 00:26:42.817 } 00:26:42.817 ] 00:26:42.817 } 00:26:42.817 } 00:26:42.817 }' 00:26:42.817 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:43.076 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:43.076 BaseBdev2' 00:26:43.076 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:43.076 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:43.076 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:43.335 "name": "BaseBdev1", 00:26:43.335 "aliases": [ 00:26:43.335 "4a6b8f84-8c9a-460b-854a-499d72ccf992" 00:26:43.335 ], 00:26:43.335 "product_name": "Malloc disk", 00:26:43.335 "block_size": 4096, 00:26:43.335 "num_blocks": 8192, 00:26:43.335 "uuid": "4a6b8f84-8c9a-460b-854a-499d72ccf992", 00:26:43.335 "assigned_rate_limits": { 00:26:43.335 "rw_ios_per_sec": 0, 00:26:43.335 "rw_mbytes_per_sec": 0, 00:26:43.335 "r_mbytes_per_sec": 0, 00:26:43.335 "w_mbytes_per_sec": 0 00:26:43.335 }, 00:26:43.335 "claimed": true, 00:26:43.335 "claim_type": "exclusive_write", 00:26:43.335 "zoned": false, 00:26:43.335 "supported_io_types": { 00:26:43.335 "read": true, 00:26:43.335 "write": true, 00:26:43.335 "unmap": true, 00:26:43.335 "flush": true, 00:26:43.335 "reset": true, 00:26:43.335 "nvme_admin": false, 00:26:43.335 "nvme_io": false, 00:26:43.335 "nvme_io_md": false, 00:26:43.335 "write_zeroes": true, 00:26:43.335 "zcopy": true, 00:26:43.335 "get_zone_info": false, 00:26:43.335 "zone_management": false, 00:26:43.335 "zone_append": false, 00:26:43.335 "compare": false, 00:26:43.335 "compare_and_write": false, 00:26:43.335 "abort": true, 00:26:43.335 "seek_hole": false, 00:26:43.335 "seek_data": false, 00:26:43.335 "copy": true, 00:26:43.335 "nvme_iov_md": false 00:26:43.335 }, 00:26:43.335 "memory_domains": [ 00:26:43.335 { 00:26:43.335 "dma_device_id": "system", 00:26:43.335 "dma_device_type": 1 00:26:43.335 }, 00:26:43.335 { 00:26:43.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.335 "dma_device_type": 2 00:26:43.335 } 00:26:43.335 ], 00:26:43.335 "driver_specific": {} 00:26:43.335 }' 00:26:43.335 18:41:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.335 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.594 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:43.594 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.594 18:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.594 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:43.594 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:43.594 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:43.594 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:43.853 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:43.853 "name": "BaseBdev2", 00:26:43.853 "aliases": [ 00:26:43.853 "c144d301-ece2-4317-bf50-92b8f508dc85" 00:26:43.853 ], 00:26:43.853 "product_name": "Malloc 
disk", 00:26:43.853 "block_size": 4096, 00:26:43.853 "num_blocks": 8192, 00:26:43.853 "uuid": "c144d301-ece2-4317-bf50-92b8f508dc85", 00:26:43.853 "assigned_rate_limits": { 00:26:43.853 "rw_ios_per_sec": 0, 00:26:43.853 "rw_mbytes_per_sec": 0, 00:26:43.853 "r_mbytes_per_sec": 0, 00:26:43.853 "w_mbytes_per_sec": 0 00:26:43.853 }, 00:26:43.853 "claimed": true, 00:26:43.853 "claim_type": "exclusive_write", 00:26:43.853 "zoned": false, 00:26:43.853 "supported_io_types": { 00:26:43.853 "read": true, 00:26:43.853 "write": true, 00:26:43.853 "unmap": true, 00:26:43.853 "flush": true, 00:26:43.853 "reset": true, 00:26:43.853 "nvme_admin": false, 00:26:43.853 "nvme_io": false, 00:26:43.853 "nvme_io_md": false, 00:26:43.853 "write_zeroes": true, 00:26:43.853 "zcopy": true, 00:26:43.853 "get_zone_info": false, 00:26:43.853 "zone_management": false, 00:26:43.853 "zone_append": false, 00:26:43.853 "compare": false, 00:26:43.853 "compare_and_write": false, 00:26:43.853 "abort": true, 00:26:43.853 "seek_hole": false, 00:26:43.853 "seek_data": false, 00:26:43.853 "copy": true, 00:26:43.853 "nvme_iov_md": false 00:26:43.853 }, 00:26:43.853 "memory_domains": [ 00:26:43.853 { 00:26:43.853 "dma_device_id": "system", 00:26:43.853 "dma_device_type": 1 00:26:43.853 }, 00:26:43.853 { 00:26:43.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.853 "dma_device_type": 2 00:26:43.853 } 00:26:43.853 ], 00:26:43.853 "driver_specific": {} 00:26:43.853 }' 00:26:43.853 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.853 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.853 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:43.853 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:44.112 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:44.370 [2024-07-15 18:41:29.869133] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.370 18:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:44.628 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.628 "name": "Existed_Raid", 00:26:44.628 "uuid": "8d82fcf9-a4af-482a-bda3-c4feef78acaa", 00:26:44.628 "strip_size_kb": 0, 00:26:44.628 "state": "online", 00:26:44.628 "raid_level": "raid1", 00:26:44.628 "superblock": true, 00:26:44.628 "num_base_bdevs": 2, 00:26:44.628 "num_base_bdevs_discovered": 1, 00:26:44.628 "num_base_bdevs_operational": 1, 00:26:44.628 "base_bdevs_list": [ 00:26:44.628 { 00:26:44.628 "name": null, 00:26:44.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.628 "is_configured": false, 00:26:44.628 "data_offset": 256, 00:26:44.628 "data_size": 7936 00:26:44.628 }, 00:26:44.628 { 00:26:44.628 "name": "BaseBdev2", 00:26:44.628 "uuid": 
"c144d301-ece2-4317-bf50-92b8f508dc85", 00:26:44.628 "is_configured": true, 00:26:44.628 "data_offset": 256, 00:26:44.628 "data_size": 7936 00:26:44.628 } 00:26:44.628 ] 00:26:44.628 }' 00:26:44.629 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.629 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:45.564 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:45.564 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:45.564 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:45.564 18:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.564 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:45.564 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:45.564 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:45.823 [2024-07-15 18:41:31.270143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:45.823 [2024-07-15 18:41:31.270229] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:45.823 [2024-07-15 18:41:31.280995] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.823 [2024-07-15 18:41:31.281029] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:45.823 [2024-07-15 18:41:31.281038] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x2251260 name Existed_Raid, state offline 00:26:45.823 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:45.823 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:45.823 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.823 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 2927909 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2927909 ']' 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2927909 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.081 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2927909 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2927909' 00:26:46.340 killing process with pid 2927909 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2927909 00:26:46.340 [2024-07-15 18:41:31.666879] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2927909 00:26:46.340 [2024-07-15 18:41:31.667730] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:26:46.340 00:26:46.340 real 0m11.508s 00:26:46.340 user 0m21.072s 00:26:46.340 sys 0m1.625s 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.340 18:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:46.340 ************************************ 00:26:46.340 END TEST raid_state_function_test_sb_4k 00:26:46.340 ************************************ 00:26:46.600 18:41:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:46.600 18:41:31 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:26:46.600 18:41:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:46.600 18:41:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.600 18:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 ************************************ 00:26:46.600 START TEST raid_superblock_test_4k 00:26:46.600 ************************************ 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:46.600 18:41:31 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2929828 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 2929828 /var/tmp/spdk-raid.sock 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@829 -- # '[' -z 2929828 ']' 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:46.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:46.600 18:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 [2024-07-15 18:41:31.970138] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:46.600 [2024-07-15 18:41:31.970197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929828 ] 00:26:46.600 [2024-07-15 18:41:32.067092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.858 [2024-07-15 18:41:32.163564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.858 [2024-07-15 18:41:32.225562] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:46.858 [2024-07-15 18:41:32.225593] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:47.425 18:41:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:47.425 18:41:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:26:47.683 malloc1 00:26:47.683 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:47.940 [2024-07-15 18:41:33.423636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:47.940 [2024-07-15 18:41:33.423680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.940 [2024-07-15 18:41:33.423699] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb95e20 00:26:47.940 [2024-07-15 18:41:33.423708] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.940 [2024-07-15 18:41:33.425496] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.940 [2024-07-15 18:41:33.425522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:26:47.940 pt1 00:26:47.940 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:47.940 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:47.941 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:26:48.198 malloc2 00:26:48.198 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:48.455 [2024-07-15 18:41:33.937621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:48.455 [2024-07-15 18:41:33.937664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:48.456 [2024-07-15 18:41:33.937678] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd3fed0 00:26:48.456 [2024-07-15 18:41:33.937686] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:48.456 [2024-07-15 18:41:33.939286] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:48.456 [2024-07-15 18:41:33.939311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:48.456 pt2 00:26:48.456 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:48.456 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:48.456 18:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:48.714 [2024-07-15 18:41:34.182280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:48.714 [2024-07-15 18:41:34.183595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:48.714 [2024-07-15 18:41:34.183742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd3f170 00:26:48.714 [2024-07-15 18:41:34.183753] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:48.714 [2024-07-15 18:41:34.183947] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd415d0 00:26:48.714 [2024-07-15 18:41:34.184108] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd3f170 00:26:48.714 [2024-07-15 18:41:34.184117] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd3f170 00:26:48.714 [2024-07-15 18:41:34.184216] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.714 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.972 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.972 "name": "raid_bdev1", 00:26:48.972 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:48.972 "strip_size_kb": 0, 00:26:48.972 "state": "online", 00:26:48.972 "raid_level": "raid1", 00:26:48.972 "superblock": true, 00:26:48.972 "num_base_bdevs": 2, 00:26:48.972 "num_base_bdevs_discovered": 2, 00:26:48.972 "num_base_bdevs_operational": 2, 00:26:48.972 "base_bdevs_list": [ 00:26:48.972 { 00:26:48.972 "name": "pt1", 00:26:48.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:48.972 "is_configured": true, 00:26:48.972 "data_offset": 256, 00:26:48.972 "data_size": 7936 00:26:48.972 }, 00:26:48.972 { 00:26:48.972 "name": "pt2", 00:26:48.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:48.972 "is_configured": true, 00:26:48.972 "data_offset": 256, 00:26:48.972 
"data_size": 7936 00:26:48.972 } 00:26:48.972 ] 00:26:48.972 }' 00:26:48.972 18:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.972 18:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:49.907 [2024-07-15 18:41:35.381748] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:49.907 "name": "raid_bdev1", 00:26:49.907 "aliases": [ 00:26:49.907 "d92bd562-d5c6-46a7-aa87-516a76dd43da" 00:26:49.907 ], 00:26:49.907 "product_name": "Raid Volume", 00:26:49.907 "block_size": 4096, 00:26:49.907 "num_blocks": 7936, 00:26:49.907 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:49.907 "assigned_rate_limits": { 00:26:49.907 "rw_ios_per_sec": 0, 00:26:49.907 "rw_mbytes_per_sec": 0, 00:26:49.907 "r_mbytes_per_sec": 0, 00:26:49.907 "w_mbytes_per_sec": 0 00:26:49.907 }, 00:26:49.907 "claimed": false, 00:26:49.907 
"zoned": false, 00:26:49.907 "supported_io_types": { 00:26:49.907 "read": true, 00:26:49.907 "write": true, 00:26:49.907 "unmap": false, 00:26:49.907 "flush": false, 00:26:49.907 "reset": true, 00:26:49.907 "nvme_admin": false, 00:26:49.907 "nvme_io": false, 00:26:49.907 "nvme_io_md": false, 00:26:49.907 "write_zeroes": true, 00:26:49.907 "zcopy": false, 00:26:49.907 "get_zone_info": false, 00:26:49.907 "zone_management": false, 00:26:49.907 "zone_append": false, 00:26:49.907 "compare": false, 00:26:49.907 "compare_and_write": false, 00:26:49.907 "abort": false, 00:26:49.907 "seek_hole": false, 00:26:49.907 "seek_data": false, 00:26:49.907 "copy": false, 00:26:49.907 "nvme_iov_md": false 00:26:49.907 }, 00:26:49.907 "memory_domains": [ 00:26:49.907 { 00:26:49.907 "dma_device_id": "system", 00:26:49.907 "dma_device_type": 1 00:26:49.907 }, 00:26:49.907 { 00:26:49.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.907 "dma_device_type": 2 00:26:49.907 }, 00:26:49.907 { 00:26:49.907 "dma_device_id": "system", 00:26:49.907 "dma_device_type": 1 00:26:49.907 }, 00:26:49.907 { 00:26:49.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.907 "dma_device_type": 2 00:26:49.907 } 00:26:49.907 ], 00:26:49.907 "driver_specific": { 00:26:49.907 "raid": { 00:26:49.907 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:49.907 "strip_size_kb": 0, 00:26:49.907 "state": "online", 00:26:49.907 "raid_level": "raid1", 00:26:49.907 "superblock": true, 00:26:49.907 "num_base_bdevs": 2, 00:26:49.907 "num_base_bdevs_discovered": 2, 00:26:49.907 "num_base_bdevs_operational": 2, 00:26:49.907 "base_bdevs_list": [ 00:26:49.907 { 00:26:49.907 "name": "pt1", 00:26:49.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:49.907 "is_configured": true, 00:26:49.907 "data_offset": 256, 00:26:49.907 "data_size": 7936 00:26:49.907 }, 00:26:49.907 { 00:26:49.907 "name": "pt2", 00:26:49.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:49.907 "is_configured": true, 00:26:49.907 
"data_offset": 256, 00:26:49.907 "data_size": 7936 00:26:49.907 } 00:26:49.907 ] 00:26:49.907 } 00:26:49.907 } 00:26:49.907 }' 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:49.907 pt2' 00:26:49.907 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:50.165 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:50.165 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:50.165 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:50.165 "name": "pt1", 00:26:50.165 "aliases": [ 00:26:50.165 "00000000-0000-0000-0000-000000000001" 00:26:50.165 ], 00:26:50.165 "product_name": "passthru", 00:26:50.165 "block_size": 4096, 00:26:50.165 "num_blocks": 8192, 00:26:50.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:50.165 "assigned_rate_limits": { 00:26:50.165 "rw_ios_per_sec": 0, 00:26:50.165 "rw_mbytes_per_sec": 0, 00:26:50.165 "r_mbytes_per_sec": 0, 00:26:50.165 "w_mbytes_per_sec": 0 00:26:50.165 }, 00:26:50.165 "claimed": true, 00:26:50.165 "claim_type": "exclusive_write", 00:26:50.165 "zoned": false, 00:26:50.165 "supported_io_types": { 00:26:50.165 "read": true, 00:26:50.165 "write": true, 00:26:50.165 "unmap": true, 00:26:50.165 "flush": true, 00:26:50.165 "reset": true, 00:26:50.165 "nvme_admin": false, 00:26:50.165 "nvme_io": false, 00:26:50.165 "nvme_io_md": false, 00:26:50.165 "write_zeroes": true, 00:26:50.165 "zcopy": true, 00:26:50.165 "get_zone_info": false, 00:26:50.165 "zone_management": false, 00:26:50.165 "zone_append": false, 00:26:50.165 
"compare": false, 00:26:50.165 "compare_and_write": false, 00:26:50.165 "abort": true, 00:26:50.165 "seek_hole": false, 00:26:50.165 "seek_data": false, 00:26:50.165 "copy": true, 00:26:50.165 "nvme_iov_md": false 00:26:50.165 }, 00:26:50.165 "memory_domains": [ 00:26:50.165 { 00:26:50.165 "dma_device_id": "system", 00:26:50.165 "dma_device_type": 1 00:26:50.165 }, 00:26:50.165 { 00:26:50.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.165 "dma_device_type": 2 00:26:50.165 } 00:26:50.165 ], 00:26:50.165 "driver_specific": { 00:26:50.165 "passthru": { 00:26:50.165 "name": "pt1", 00:26:50.165 "base_bdev_name": "malloc1" 00:26:50.165 } 00:26:50.165 } 00:26:50.165 }' 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:50.423 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.681 18:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:50.681 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:50.939 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:50.939 "name": "pt2", 00:26:50.939 "aliases": [ 00:26:50.939 "00000000-0000-0000-0000-000000000002" 00:26:50.939 ], 00:26:50.939 "product_name": "passthru", 00:26:50.939 "block_size": 4096, 00:26:50.939 "num_blocks": 8192, 00:26:50.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:50.939 "assigned_rate_limits": { 00:26:50.939 "rw_ios_per_sec": 0, 00:26:50.939 "rw_mbytes_per_sec": 0, 00:26:50.939 "r_mbytes_per_sec": 0, 00:26:50.939 "w_mbytes_per_sec": 0 00:26:50.939 }, 00:26:50.939 "claimed": true, 00:26:50.939 "claim_type": "exclusive_write", 00:26:50.939 "zoned": false, 00:26:50.939 "supported_io_types": { 00:26:50.939 "read": true, 00:26:50.939 "write": true, 00:26:50.939 "unmap": true, 00:26:50.939 "flush": true, 00:26:50.939 "reset": true, 00:26:50.939 "nvme_admin": false, 00:26:50.939 "nvme_io": false, 00:26:50.939 "nvme_io_md": false, 00:26:50.939 "write_zeroes": true, 00:26:50.939 "zcopy": true, 00:26:50.939 "get_zone_info": false, 00:26:50.939 "zone_management": false, 00:26:50.939 "zone_append": false, 00:26:50.939 "compare": false, 00:26:50.939 "compare_and_write": false, 00:26:50.939 "abort": true, 00:26:50.939 "seek_hole": false, 00:26:50.939 "seek_data": false, 00:26:50.939 "copy": true, 00:26:50.939 "nvme_iov_md": false 00:26:50.939 }, 00:26:50.939 "memory_domains": [ 00:26:50.939 { 00:26:50.939 "dma_device_id": "system", 00:26:50.939 "dma_device_type": 1 00:26:50.939 }, 00:26:50.939 { 00:26:50.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.939 "dma_device_type": 2 00:26:50.939 } 00:26:50.939 ], 00:26:50.939 
"driver_specific": { 00:26:50.939 "passthru": { 00:26:50.939 "name": "pt2", 00:26:50.939 "base_bdev_name": "malloc2" 00:26:50.939 } 00:26:50.939 } 00:26:50.939 }' 00:26:50.939 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.198 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.198 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:51.198 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.198 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.198 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:51.198 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:51.456 18:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:52.023 [2024-07-15 18:41:37.415232] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:52.023 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d92bd562-d5c6-46a7-aa87-516a76dd43da 00:26:52.023 18:41:37 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z d92bd562-d5c6-46a7-aa87-516a76dd43da ']' 00:26:52.024 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:52.283 [2024-07-15 18:41:37.683676] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:52.283 [2024-07-15 18:41:37.683695] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:52.283 [2024-07-15 18:41:37.683746] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.283 [2024-07-15 18:41:37.683800] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.283 [2024-07-15 18:41:37.683809] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3f170 name raid_bdev1, state offline 00:26:52.283 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.283 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:52.542 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:52.542 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:52.542 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:52.542 18:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:52.801 18:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:52.801 18:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:53.060 18:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:53.060 18:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.673 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:26:53.674 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.674 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:53.674 18:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:53.932 [2024-07-15 18:41:39.440312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:53.932 [2024-07-15 18:41:39.441728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:53.932 [2024-07-15 18:41:39.441784] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:53.932 [2024-07-15 18:41:39.441820] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:53.932 [2024-07-15 18:41:39.441836] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:53.932 [2024-07-15 18:41:39.441843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd40900 name raid_bdev1, state configuring 00:26:53.932 request: 00:26:53.932 { 00:26:53.932 "name": "raid_bdev1", 00:26:53.932 "raid_level": "raid1", 00:26:53.932 "base_bdevs": [ 00:26:53.932 "malloc1", 00:26:53.932 "malloc2" 00:26:53.932 ], 00:26:53.932 "superblock": false, 00:26:53.932 "method": "bdev_raid_create", 00:26:53.932 "req_id": 1 00:26:53.932 } 00:26:53.932 Got JSON-RPC error response 00:26:53.932 response: 00:26:53.932 { 00:26:53.932 "code": -17, 00:26:53.932 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:53.932 } 00:26:53.932 18:41:39 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:26:53.932 18:41:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:53.932 18:41:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:53.932 18:41:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:53.932 18:41:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:53.932 18:41:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.191 18:41:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:54.191 18:41:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:54.191 18:41:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:54.758 [2024-07-15 18:41:40.198258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:54.758 [2024-07-15 18:41:40.198305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.758 [2024-07-15 18:41:40.198321] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb94760 00:26:54.758 [2024-07-15 18:41:40.198330] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.758 [2024-07-15 18:41:40.199995] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.758 [2024-07-15 18:41:40.200019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:54.758 [2024-07-15 18:41:40.200084] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 
00:26:54.758 [2024-07-15 18:41:40.200105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:54.758 pt1 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.758 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.016 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.016 "name": "raid_bdev1", 00:26:55.016 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:55.016 "strip_size_kb": 0, 00:26:55.016 "state": "configuring", 00:26:55.016 "raid_level": "raid1", 00:26:55.016 "superblock": true, 00:26:55.016 "num_base_bdevs": 2, 00:26:55.016 
"num_base_bdevs_discovered": 1, 00:26:55.016 "num_base_bdevs_operational": 2, 00:26:55.016 "base_bdevs_list": [ 00:26:55.016 { 00:26:55.016 "name": "pt1", 00:26:55.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:55.016 "is_configured": true, 00:26:55.016 "data_offset": 256, 00:26:55.016 "data_size": 7936 00:26:55.016 }, 00:26:55.016 { 00:26:55.016 "name": null, 00:26:55.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:55.016 "is_configured": false, 00:26:55.016 "data_offset": 256, 00:26:55.016 "data_size": 7936 00:26:55.016 } 00:26:55.016 ] 00:26:55.016 }' 00:26:55.016 18:41:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.016 18:41:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:55.956 [2024-07-15 18:41:41.465684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:55.956 [2024-07-15 18:41:41.465738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.956 [2024-07-15 18:41:41.465753] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb96a40 00:26:55.956 [2024-07-15 18:41:41.465762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.956 [2024-07-15 18:41:41.466109] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.956 [2024-07-15 18:41:41.466125] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:55.956 [2024-07-15 18:41:41.466182] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:55.956 [2024-07-15 18:41:41.466199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:55.956 [2024-07-15 18:41:41.466295] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd424e0 00:26:55.956 [2024-07-15 18:41:41.466304] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:55.956 [2024-07-15 18:41:41.466479] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd45cd0 00:26:55.956 [2024-07-15 18:41:41.466608] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd424e0 00:26:55.956 [2024-07-15 18:41:41.466616] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd424e0 00:26:55.956 [2024-07-15 18:41:41.466715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.956 pt2 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.956 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.214 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.214 "name": "raid_bdev1", 00:26:56.214 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:56.214 "strip_size_kb": 0, 00:26:56.214 "state": "online", 00:26:56.214 "raid_level": "raid1", 00:26:56.214 "superblock": true, 00:26:56.214 "num_base_bdevs": 2, 00:26:56.214 "num_base_bdevs_discovered": 2, 00:26:56.214 "num_base_bdevs_operational": 2, 00:26:56.214 "base_bdevs_list": [ 00:26:56.214 { 00:26:56.214 "name": "pt1", 00:26:56.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:56.214 "is_configured": true, 00:26:56.214 "data_offset": 256, 00:26:56.214 "data_size": 7936 00:26:56.214 }, 00:26:56.214 { 00:26:56.215 "name": "pt2", 00:26:56.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:56.215 "is_configured": true, 00:26:56.215 "data_offset": 256, 00:26:56.215 "data_size": 7936 00:26:56.215 } 00:26:56.215 ] 00:26:56.215 }' 00:26:56.215 18:41:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.215 18:41:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:56.781 18:41:42 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:56.781 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:57.040 [2024-07-15 18:41:42.444585] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:57.040 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:57.040 "name": "raid_bdev1", 00:26:57.040 "aliases": [ 00:26:57.040 "d92bd562-d5c6-46a7-aa87-516a76dd43da" 00:26:57.040 ], 00:26:57.040 "product_name": "Raid Volume", 00:26:57.040 "block_size": 4096, 00:26:57.040 "num_blocks": 7936, 00:26:57.040 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:57.040 "assigned_rate_limits": { 00:26:57.040 "rw_ios_per_sec": 0, 00:26:57.040 "rw_mbytes_per_sec": 0, 00:26:57.040 "r_mbytes_per_sec": 0, 00:26:57.040 "w_mbytes_per_sec": 0 00:26:57.040 }, 00:26:57.040 "claimed": false, 00:26:57.040 "zoned": false, 00:26:57.040 "supported_io_types": { 00:26:57.040 "read": true, 00:26:57.040 "write": true, 00:26:57.040 "unmap": false, 00:26:57.040 "flush": false, 00:26:57.040 "reset": true, 00:26:57.040 "nvme_admin": false, 00:26:57.040 "nvme_io": false, 00:26:57.040 "nvme_io_md": false, 
00:26:57.040 "write_zeroes": true, 00:26:57.040 "zcopy": false, 00:26:57.040 "get_zone_info": false, 00:26:57.040 "zone_management": false, 00:26:57.040 "zone_append": false, 00:26:57.040 "compare": false, 00:26:57.040 "compare_and_write": false, 00:26:57.040 "abort": false, 00:26:57.040 "seek_hole": false, 00:26:57.040 "seek_data": false, 00:26:57.040 "copy": false, 00:26:57.040 "nvme_iov_md": false 00:26:57.040 }, 00:26:57.040 "memory_domains": [ 00:26:57.040 { 00:26:57.040 "dma_device_id": "system", 00:26:57.040 "dma_device_type": 1 00:26:57.040 }, 00:26:57.040 { 00:26:57.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.040 "dma_device_type": 2 00:26:57.040 }, 00:26:57.040 { 00:26:57.040 "dma_device_id": "system", 00:26:57.040 "dma_device_type": 1 00:26:57.040 }, 00:26:57.040 { 00:26:57.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.040 "dma_device_type": 2 00:26:57.040 } 00:26:57.040 ], 00:26:57.040 "driver_specific": { 00:26:57.040 "raid": { 00:26:57.040 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:57.040 "strip_size_kb": 0, 00:26:57.040 "state": "online", 00:26:57.040 "raid_level": "raid1", 00:26:57.040 "superblock": true, 00:26:57.040 "num_base_bdevs": 2, 00:26:57.040 "num_base_bdevs_discovered": 2, 00:26:57.040 "num_base_bdevs_operational": 2, 00:26:57.040 "base_bdevs_list": [ 00:26:57.040 { 00:26:57.040 "name": "pt1", 00:26:57.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:57.040 "is_configured": true, 00:26:57.040 "data_offset": 256, 00:26:57.040 "data_size": 7936 00:26:57.040 }, 00:26:57.040 { 00:26:57.040 "name": "pt2", 00:26:57.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:57.040 "is_configured": true, 00:26:57.040 "data_offset": 256, 00:26:57.040 "data_size": 7936 00:26:57.040 } 00:26:57.040 ] 00:26:57.040 } 00:26:57.040 } 00:26:57.040 }' 00:26:57.040 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:26:57.040 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:57.040 pt2' 00:26:57.040 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:57.040 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:57.040 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:57.299 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:57.299 "name": "pt1", 00:26:57.299 "aliases": [ 00:26:57.299 "00000000-0000-0000-0000-000000000001" 00:26:57.299 ], 00:26:57.299 "product_name": "passthru", 00:26:57.299 "block_size": 4096, 00:26:57.299 "num_blocks": 8192, 00:26:57.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:57.299 "assigned_rate_limits": { 00:26:57.299 "rw_ios_per_sec": 0, 00:26:57.299 "rw_mbytes_per_sec": 0, 00:26:57.299 "r_mbytes_per_sec": 0, 00:26:57.299 "w_mbytes_per_sec": 0 00:26:57.299 }, 00:26:57.299 "claimed": true, 00:26:57.299 "claim_type": "exclusive_write", 00:26:57.299 "zoned": false, 00:26:57.299 "supported_io_types": { 00:26:57.299 "read": true, 00:26:57.299 "write": true, 00:26:57.299 "unmap": true, 00:26:57.299 "flush": true, 00:26:57.299 "reset": true, 00:26:57.299 "nvme_admin": false, 00:26:57.299 "nvme_io": false, 00:26:57.299 "nvme_io_md": false, 00:26:57.299 "write_zeroes": true, 00:26:57.299 "zcopy": true, 00:26:57.299 "get_zone_info": false, 00:26:57.299 "zone_management": false, 00:26:57.299 "zone_append": false, 00:26:57.299 "compare": false, 00:26:57.299 "compare_and_write": false, 00:26:57.299 "abort": true, 00:26:57.299 "seek_hole": false, 00:26:57.299 "seek_data": false, 00:26:57.299 "copy": true, 00:26:57.299 "nvme_iov_md": false 00:26:57.299 }, 00:26:57.299 "memory_domains": [ 00:26:57.299 { 00:26:57.299 
"dma_device_id": "system", 00:26:57.299 "dma_device_type": 1 00:26:57.299 }, 00:26:57.299 { 00:26:57.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.299 "dma_device_type": 2 00:26:57.299 } 00:26:57.299 ], 00:26:57.299 "driver_specific": { 00:26:57.299 "passthru": { 00:26:57.299 "name": "pt1", 00:26:57.299 "base_bdev_name": "malloc1" 00:26:57.299 } 00:26:57.299 } 00:26:57.299 }' 00:26:57.299 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.299 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.558 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:57.558 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.558 18:41:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.558 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:57.558 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.558 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.558 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:57.558 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.816 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.816 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:57.816 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:57.816 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:57.816 18:41:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.074 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:58.074 "name": "pt2", 00:26:58.074 "aliases": [ 00:26:58.074 "00000000-0000-0000-0000-000000000002" 00:26:58.074 ], 00:26:58.074 "product_name": "passthru", 00:26:58.074 "block_size": 4096, 00:26:58.074 "num_blocks": 8192, 00:26:58.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:58.074 "assigned_rate_limits": { 00:26:58.074 "rw_ios_per_sec": 0, 00:26:58.074 "rw_mbytes_per_sec": 0, 00:26:58.074 "r_mbytes_per_sec": 0, 00:26:58.074 "w_mbytes_per_sec": 0 00:26:58.074 }, 00:26:58.074 "claimed": true, 00:26:58.074 "claim_type": "exclusive_write", 00:26:58.074 "zoned": false, 00:26:58.074 "supported_io_types": { 00:26:58.074 "read": true, 00:26:58.074 "write": true, 00:26:58.074 "unmap": true, 00:26:58.074 "flush": true, 00:26:58.074 "reset": true, 00:26:58.074 "nvme_admin": false, 00:26:58.074 "nvme_io": false, 00:26:58.074 "nvme_io_md": false, 00:26:58.074 "write_zeroes": true, 00:26:58.074 "zcopy": true, 00:26:58.074 "get_zone_info": false, 00:26:58.074 "zone_management": false, 00:26:58.074 "zone_append": false, 00:26:58.074 "compare": false, 00:26:58.074 "compare_and_write": false, 00:26:58.074 "abort": true, 00:26:58.074 "seek_hole": false, 00:26:58.074 "seek_data": false, 00:26:58.074 "copy": true, 00:26:58.074 "nvme_iov_md": false 00:26:58.074 }, 00:26:58.074 "memory_domains": [ 00:26:58.074 { 00:26:58.074 "dma_device_id": "system", 00:26:58.074 "dma_device_type": 1 00:26:58.074 }, 00:26:58.074 { 00:26:58.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.074 "dma_device_type": 2 00:26:58.074 } 00:26:58.074 ], 00:26:58.074 "driver_specific": { 00:26:58.074 "passthru": { 00:26:58.074 "name": "pt2", 00:26:58.074 "base_bdev_name": "malloc2" 00:26:58.074 } 00:26:58.074 } 00:26:58.074 }' 00:26:58.074 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.074 18:41:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.074 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:58.074 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.074 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.332 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:58.332 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.332 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.332 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:58.332 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.332 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.589 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:58.590 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:58.590 18:41:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:58.847 [2024-07-15 18:41:44.349736] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:58.847 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' d92bd562-d5c6-46a7-aa87-516a76dd43da '!=' d92bd562-d5c6-46a7-aa87-516a76dd43da ']' 00:26:58.847 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:58.847 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:58.847 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # 
return 0 00:26:58.847 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:59.106 [2024-07-15 18:41:44.618214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.106 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.363 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.363 "name": "raid_bdev1", 00:26:59.363 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:26:59.363 
"strip_size_kb": 0, 00:26:59.363 "state": "online", 00:26:59.363 "raid_level": "raid1", 00:26:59.363 "superblock": true, 00:26:59.363 "num_base_bdevs": 2, 00:26:59.363 "num_base_bdevs_discovered": 1, 00:26:59.363 "num_base_bdevs_operational": 1, 00:26:59.363 "base_bdevs_list": [ 00:26:59.363 { 00:26:59.363 "name": null, 00:26:59.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.363 "is_configured": false, 00:26:59.363 "data_offset": 256, 00:26:59.363 "data_size": 7936 00:26:59.363 }, 00:26:59.363 { 00:26:59.363 "name": "pt2", 00:26:59.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.363 "is_configured": true, 00:26:59.363 "data_offset": 256, 00:26:59.363 "data_size": 7936 00:26:59.363 } 00:26:59.363 ] 00:26:59.363 }' 00:26:59.363 18:41:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.363 18:41:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:00.297 18:41:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:00.297 [2024-07-15 18:41:45.765287] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:00.297 [2024-07-15 18:41:45.765310] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:00.297 [2024-07-15 18:41:45.765357] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:00.297 [2024-07-15 18:41:45.765401] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:00.297 [2024-07-15 18:41:45.765410] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd424e0 name raid_bdev1, state offline 00:27:00.297 18:41:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:00.297 18:41:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:00.556 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:00.556 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:00.556 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:00.556 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:00.556 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:00.814 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:00.814 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:00.814 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:00.814 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:00.814 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:27:00.814 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:01.073 [2024-07-15 18:41:46.543334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:01.073 [2024-07-15 18:41:46.543374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.073 [2024-07-15 18:41:46.543391] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb96050 00:27:01.073 [2024-07-15 18:41:46.543401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.073 [2024-07-15 
18:41:46.545055] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.073 [2024-07-15 18:41:46.545081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:01.073 [2024-07-15 18:41:46.545140] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:01.073 [2024-07-15 18:41:46.545162] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:01.073 [2024-07-15 18:41:46.545242] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd46f20 00:27:01.073 [2024-07-15 18:41:46.545251] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:01.073 [2024-07-15 18:41:46.545426] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb97420 00:27:01.073 [2024-07-15 18:41:46.545551] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd46f20 00:27:01.073 [2024-07-15 18:41:46.545560] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd46f20 00:27:01.073 [2024-07-15 18:41:46.545658] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.073 pt2 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.073 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.332 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.332 "name": "raid_bdev1", 00:27:01.332 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:27:01.332 "strip_size_kb": 0, 00:27:01.332 "state": "online", 00:27:01.332 "raid_level": "raid1", 00:27:01.332 "superblock": true, 00:27:01.332 "num_base_bdevs": 2, 00:27:01.332 "num_base_bdevs_discovered": 1, 00:27:01.332 "num_base_bdevs_operational": 1, 00:27:01.332 "base_bdevs_list": [ 00:27:01.332 { 00:27:01.332 "name": null, 00:27:01.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.332 "is_configured": false, 00:27:01.333 "data_offset": 256, 00:27:01.333 "data_size": 7936 00:27:01.333 }, 00:27:01.333 { 00:27:01.333 "name": "pt2", 00:27:01.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:01.333 "is_configured": true, 00:27:01.333 "data_offset": 256, 00:27:01.333 "data_size": 7936 00:27:01.333 } 00:27:01.333 ] 00:27:01.333 }' 00:27:01.333 18:41:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.333 18:41:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:01.899 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:02.158 [2024-07-15 18:41:47.622232] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.158 [2024-07-15 18:41:47.622255] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.158 [2024-07-15 18:41:47.622301] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.158 [2024-07-15 18:41:47.622342] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.158 [2024-07-15 18:41:47.622351] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd46f20 name raid_bdev1, state offline 00:27:02.158 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.158 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:02.416 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:02.416 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:02.416 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:27:02.416 18:41:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:02.676 [2024-07-15 18:41:48.139605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:02.676 [2024-07-15 18:41:48.139647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.676 [2024-07-15 18:41:48.139661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0xd40100 00:27:02.676 [2024-07-15 18:41:48.139670] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.676 [2024-07-15 18:41:48.141322] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.676 [2024-07-15 18:41:48.141348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:02.676 [2024-07-15 18:41:48.141406] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:02.676 [2024-07-15 18:41:48.141428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:02.676 [2024-07-15 18:41:48.141526] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:02.676 [2024-07-15 18:41:48.141542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.676 [2024-07-15 18:41:48.141554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd44df0 name raid_bdev1, state configuring 00:27:02.676 [2024-07-15 18:41:48.141574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.676 [2024-07-15 18:41:48.141629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd45900 00:27:02.676 [2024-07-15 18:41:48.141637] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:02.676 [2024-07-15 18:41:48.141811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd40be0 00:27:02.676 [2024-07-15 18:41:48.141937] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd45900 00:27:02.676 [2024-07-15 18:41:48.141945] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd45900 00:27:02.676 [2024-07-15 18:41:48.142056] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.676 pt1 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.676 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.935 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.935 "name": "raid_bdev1", 00:27:02.935 "uuid": "d92bd562-d5c6-46a7-aa87-516a76dd43da", 00:27:02.935 "strip_size_kb": 0, 00:27:02.935 "state": "online", 00:27:02.935 "raid_level": "raid1", 00:27:02.935 "superblock": true, 00:27:02.935 "num_base_bdevs": 2, 00:27:02.935 "num_base_bdevs_discovered": 1, 00:27:02.935 "num_base_bdevs_operational": 1, 00:27:02.935 "base_bdevs_list": 
[ 00:27:02.935 { 00:27:02.935 "name": null, 00:27:02.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.935 "is_configured": false, 00:27:02.935 "data_offset": 256, 00:27:02.935 "data_size": 7936 00:27:02.935 }, 00:27:02.935 { 00:27:02.935 "name": "pt2", 00:27:02.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.935 "is_configured": true, 00:27:02.935 "data_offset": 256, 00:27:02.935 "data_size": 7936 00:27:02.935 } 00:27:02.935 ] 00:27:02.935 }' 00:27:02.935 18:41:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.935 18:41:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.872 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:03.872 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:03.872 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:03.872 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:03.872 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:04.131 [2024-07-15 18:41:49.595769] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' d92bd562-d5c6-46a7-aa87-516a76dd43da '!=' d92bd562-d5c6-46a7-aa87-516a76dd43da ']' 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2929828 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 2929828 ']' 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k 
-- common/autotest_common.sh@952 -- # kill -0 2929828 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2929828 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2929828' 00:27:04.131 killing process with pid 2929828 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 2929828 00:27:04.131 [2024-07-15 18:41:49.659211] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:04.131 [2024-07-15 18:41:49.659266] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.131 [2024-07-15 18:41:49.659307] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.131 [2024-07-15 18:41:49.659316] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd45900 name raid_bdev1, state offline 00:27:04.131 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 2929828 00:27:04.131 [2024-07-15 18:41:49.675964] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:04.391 18:41:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:27:04.391 00:27:04.391 real 0m17.965s 00:27:04.391 user 0m33.652s 00:27:04.391 sys 0m2.399s 00:27:04.391 18:41:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.391 18:41:49 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:04.391 ************************************ 00:27:04.391 END TEST raid_superblock_test_4k 00:27:04.391 ************************************ 00:27:04.391 18:41:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:04.391 18:41:49 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:27:04.391 18:41:49 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:27:04.391 18:41:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:04.391 18:41:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.391 18:41:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:04.391 ************************************ 00:27:04.391 START TEST raid_rebuild_test_sb_4k 00:27:04.391 ************************************ 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( 
i++ )) 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2932899 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2932899 /var/tmp/spdk-raid.sock 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2932899 ']' 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:04.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.391 18:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:04.650 [2024-07-15 18:41:49.983484] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:27:04.650 [2024-07-15 18:41:49.983548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932899 ] 00:27:04.650 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:04.650 Zero copy mechanism will not be used. 
00:27:04.650 [2024-07-15 18:41:50.083374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.650 [2024-07-15 18:41:50.178463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.907 [2024-07-15 18:41:50.237132] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:04.907 [2024-07-15 18:41:50.237163] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:05.473 18:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.473 18:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:27:05.473 18:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:05.473 18:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:27:05.731 BaseBdev1_malloc 00:27:05.731 18:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:05.990 [2024-07-15 18:41:51.438118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:05.990 [2024-07-15 18:41:51.438164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.990 [2024-07-15 18:41:51.438183] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2496130 00:27:05.990 [2024-07-15 18:41:51.438192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.990 [2024-07-15 18:41:51.439785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.990 [2024-07-15 18:41:51.439810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:05.990 
BaseBdev1 00:27:05.990 18:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:05.990 18:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:27:06.248 BaseBdev2_malloc 00:27:06.248 18:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:06.506 [2024-07-15 18:41:51.959844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:06.506 [2024-07-15 18:41:51.959883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.506 [2024-07-15 18:41:51.959899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x263bfa0 00:27:06.506 [2024-07-15 18:41:51.959908] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.506 [2024-07-15 18:41:51.961352] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.507 [2024-07-15 18:41:51.961376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:06.507 BaseBdev2 00:27:06.507 18:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:27:06.765 spare_malloc 00:27:06.765 18:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:07.024 spare_delay 00:27:07.025 18:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:07.284 [2024-07-15 18:41:52.746298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:07.284 [2024-07-15 18:41:52.746335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.284 [2024-07-15 18:41:52.746350] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x263df40 00:27:07.284 [2024-07-15 18:41:52.746359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.284 [2024-07-15 18:41:52.747837] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.284 [2024-07-15 18:41:52.747862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:07.284 spare 00:27:07.284 18:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:07.542 [2024-07-15 18:41:53.003005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:07.543 [2024-07-15 18:41:53.004232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:07.543 [2024-07-15 18:41:53.004386] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x263f370 00:27:07.543 [2024-07-15 18:41:53.004398] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:07.543 [2024-07-15 18:41:53.004576] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x263e1d0 00:27:07.543 [2024-07-15 18:41:53.004717] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x263f370 00:27:07.543 [2024-07-15 18:41:53.004729] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x263f370 00:27:07.543 [2024-07-15 18:41:53.004821] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.543 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.801 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.801 "name": "raid_bdev1", 00:27:07.801 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:07.801 "strip_size_kb": 0, 00:27:07.801 "state": "online", 00:27:07.801 "raid_level": "raid1", 00:27:07.801 "superblock": true, 00:27:07.801 "num_base_bdevs": 2, 00:27:07.801 
"num_base_bdevs_discovered": 2, 00:27:07.801 "num_base_bdevs_operational": 2, 00:27:07.801 "base_bdevs_list": [ 00:27:07.801 { 00:27:07.801 "name": "BaseBdev1", 00:27:07.801 "uuid": "1688f609-4446-5698-b871-32846969a522", 00:27:07.801 "is_configured": true, 00:27:07.801 "data_offset": 256, 00:27:07.801 "data_size": 7936 00:27:07.801 }, 00:27:07.801 { 00:27:07.801 "name": "BaseBdev2", 00:27:07.801 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:07.801 "is_configured": true, 00:27:07.801 "data_offset": 256, 00:27:07.801 "data_size": 7936 00:27:07.801 } 00:27:07.801 ] 00:27:07.801 }' 00:27:07.801 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.801 18:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.763 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:08.763 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:08.763 [2024-07-15 18:41:54.246628] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.763 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:27:08.763 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.763 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:09.329 
18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:09.329 18:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:09.586 [2024-07-15 18:41:55.000447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x263e1d0 00:27:09.586 /dev/nbd0 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:09.586 18:41:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:09.586 1+0 records in 00:27:09.586 1+0 records out 00:27:09.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224839 s, 18.2 MB/s 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:09.586 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:10.519 7936+0 records in 00:27:10.519 7936+0 records out 00:27:10.519 32505856 bytes (33 MB, 31 MiB) copied, 0.72894 s, 44.6 MB/s 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.519 18:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:10.519 [2024-07-15 18:41:56.066238] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.519 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:10.519 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:10.519 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:10.777 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.777 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.777 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:10.777 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:10.777 18:41:56 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@45 -- # return 0 00:27:10.777 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:10.777 [2024-07-15 18:41:56.310929] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.035 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.293 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.293 "name": "raid_bdev1", 00:27:11.293 "uuid": 
"739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:11.293 "strip_size_kb": 0, 00:27:11.293 "state": "online", 00:27:11.293 "raid_level": "raid1", 00:27:11.293 "superblock": true, 00:27:11.293 "num_base_bdevs": 2, 00:27:11.293 "num_base_bdevs_discovered": 1, 00:27:11.293 "num_base_bdevs_operational": 1, 00:27:11.293 "base_bdevs_list": [ 00:27:11.294 { 00:27:11.294 "name": null, 00:27:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.294 "is_configured": false, 00:27:11.294 "data_offset": 256, 00:27:11.294 "data_size": 7936 00:27:11.294 }, 00:27:11.294 { 00:27:11.294 "name": "BaseBdev2", 00:27:11.294 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:11.294 "is_configured": true, 00:27:11.294 "data_offset": 256, 00:27:11.294 "data_size": 7936 00:27:11.294 } 00:27:11.294 ] 00:27:11.294 }' 00:27:11.294 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.294 18:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:11.860 18:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:12.118 [2024-07-15 18:41:57.437966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:12.118 [2024-07-15 18:41:57.442785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2647cf0 00:27:12.118 [2024-07-15 18:41:57.444843] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:12.118 18:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:13.054 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.054 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:13.054 18:41:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:13.054 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:13.054 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:13.054 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.054 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.312 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:13.312 "name": "raid_bdev1", 00:27:13.312 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:13.312 "strip_size_kb": 0, 00:27:13.312 "state": "online", 00:27:13.312 "raid_level": "raid1", 00:27:13.312 "superblock": true, 00:27:13.312 "num_base_bdevs": 2, 00:27:13.312 "num_base_bdevs_discovered": 2, 00:27:13.312 "num_base_bdevs_operational": 2, 00:27:13.312 "process": { 00:27:13.312 "type": "rebuild", 00:27:13.312 "target": "spare", 00:27:13.312 "progress": { 00:27:13.312 "blocks": 3072, 00:27:13.312 "percent": 38 00:27:13.312 } 00:27:13.312 }, 00:27:13.312 "base_bdevs_list": [ 00:27:13.312 { 00:27:13.312 "name": "spare", 00:27:13.312 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:13.312 "is_configured": true, 00:27:13.312 "data_offset": 256, 00:27:13.312 "data_size": 7936 00:27:13.312 }, 00:27:13.312 { 00:27:13.312 "name": "BaseBdev2", 00:27:13.312 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:13.312 "is_configured": true, 00:27:13.312 "data_offset": 256, 00:27:13.312 "data_size": 7936 00:27:13.312 } 00:27:13.312 ] 00:27:13.312 }' 00:27:13.312 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:13.312 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.312 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:13.312 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:13.312 18:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:13.878 [2024-07-15 18:41:59.320389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.878 [2024-07-15 18:41:59.359122] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:13.878 [2024-07-15 18:41:59.359166] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.878 [2024-07-15 18:41:59.359181] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.878 [2024-07-15 18:41:59.359187] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.878 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.444 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.444 "name": "raid_bdev1", 00:27:14.444 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:14.444 "strip_size_kb": 0, 00:27:14.444 "state": "online", 00:27:14.444 "raid_level": "raid1", 00:27:14.444 "superblock": true, 00:27:14.444 "num_base_bdevs": 2, 00:27:14.444 "num_base_bdevs_discovered": 1, 00:27:14.444 "num_base_bdevs_operational": 1, 00:27:14.444 "base_bdevs_list": [ 00:27:14.444 { 00:27:14.444 "name": null, 00:27:14.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.444 "is_configured": false, 00:27:14.444 "data_offset": 256, 00:27:14.444 "data_size": 7936 00:27:14.444 }, 00:27:14.444 { 00:27:14.444 "name": "BaseBdev2", 00:27:14.444 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:14.444 "is_configured": true, 00:27:14.444 "data_offset": 256, 00:27:14.444 "data_size": 7936 00:27:14.444 } 00:27:14.444 ] 00:27:14.444 }' 00:27:14.444 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.444 18:41:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.379 18:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.637 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.637 "name": "raid_bdev1", 00:27:15.637 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:15.637 "strip_size_kb": 0, 00:27:15.637 "state": "online", 00:27:15.637 "raid_level": "raid1", 00:27:15.637 "superblock": true, 00:27:15.637 "num_base_bdevs": 2, 00:27:15.637 "num_base_bdevs_discovered": 1, 00:27:15.637 "num_base_bdevs_operational": 1, 00:27:15.637 "base_bdevs_list": [ 00:27:15.637 { 00:27:15.637 "name": null, 00:27:15.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.637 "is_configured": false, 00:27:15.637 "data_offset": 256, 00:27:15.637 "data_size": 7936 00:27:15.637 }, 00:27:15.637 { 00:27:15.637 "name": "BaseBdev2", 00:27:15.637 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:15.637 "is_configured": true, 00:27:15.637 "data_offset": 256, 00:27:15.637 "data_size": 7936 00:27:15.637 } 00:27:15.637 ] 00:27:15.637 }' 00:27:15.637 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.637 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:15.637 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:27:15.637 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:15.637 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:16.203 [2024-07-15 18:42:01.593529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:16.203 [2024-07-15 18:42:01.598368] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2647cf0 00:27:16.203 [2024-07-15 18:42:01.599878] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:16.203 18:42:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.138 18:42:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.705 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:17.705 "name": "raid_bdev1", 00:27:17.706 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:17.706 "strip_size_kb": 0, 00:27:17.706 "state": "online", 00:27:17.706 
"raid_level": "raid1", 00:27:17.706 "superblock": true, 00:27:17.706 "num_base_bdevs": 2, 00:27:17.706 "num_base_bdevs_discovered": 2, 00:27:17.706 "num_base_bdevs_operational": 2, 00:27:17.706 "process": { 00:27:17.706 "type": "rebuild", 00:27:17.706 "target": "spare", 00:27:17.706 "progress": { 00:27:17.706 "blocks": 3584, 00:27:17.706 "percent": 45 00:27:17.706 } 00:27:17.706 }, 00:27:17.706 "base_bdevs_list": [ 00:27:17.706 { 00:27:17.706 "name": "spare", 00:27:17.706 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:17.706 "is_configured": true, 00:27:17.706 "data_offset": 256, 00:27:17.706 "data_size": 7936 00:27:17.706 }, 00:27:17.706 { 00:27:17.706 "name": "BaseBdev2", 00:27:17.706 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:17.706 "is_configured": true, 00:27:17.706 "data_offset": 256, 00:27:17.706 "data_size": 7936 00:27:17.706 } 00:27:17.706 ] 00:27:17.706 }' 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:17.706 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1082 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.706 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.271 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.271 "name": "raid_bdev1", 00:27:18.271 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:18.271 "strip_size_kb": 0, 00:27:18.271 "state": "online", 00:27:18.271 "raid_level": "raid1", 00:27:18.271 "superblock": true, 00:27:18.271 "num_base_bdevs": 2, 00:27:18.271 "num_base_bdevs_discovered": 2, 00:27:18.271 "num_base_bdevs_operational": 2, 00:27:18.271 "process": { 00:27:18.271 "type": "rebuild", 00:27:18.271 "target": "spare", 00:27:18.271 "progress": { 00:27:18.271 "blocks": 5120, 00:27:18.271 "percent": 64 00:27:18.271 } 00:27:18.271 }, 00:27:18.271 "base_bdevs_list": [ 00:27:18.271 { 00:27:18.271 "name": "spare", 00:27:18.271 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:18.271 "is_configured": 
true, 00:27:18.271 "data_offset": 256, 00:27:18.271 "data_size": 7936 00:27:18.271 }, 00:27:18.271 { 00:27:18.271 "name": "BaseBdev2", 00:27:18.271 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:18.271 "is_configured": true, 00:27:18.271 "data_offset": 256, 00:27:18.271 "data_size": 7936 00:27:18.271 } 00:27:18.271 ] 00:27:18.271 }' 00:27:18.271 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.271 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:18.271 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.528 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:18.528 18:42:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:19.463 [2024-07-15 18:42:04.723090] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:19.463 [2024-07-15 18:42:04.723152] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:19.463 [2024-07-15 18:42:04.723233] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:19.463 18:42:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.463 18:42:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:20.030 "name": "raid_bdev1", 00:27:20.030 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:20.030 "strip_size_kb": 0, 00:27:20.030 "state": "online", 00:27:20.030 "raid_level": "raid1", 00:27:20.030 "superblock": true, 00:27:20.030 "num_base_bdevs": 2, 00:27:20.030 "num_base_bdevs_discovered": 2, 00:27:20.030 "num_base_bdevs_operational": 2, 00:27:20.030 "base_bdevs_list": [ 00:27:20.030 { 00:27:20.030 "name": "spare", 00:27:20.030 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:20.030 "is_configured": true, 00:27:20.030 "data_offset": 256, 00:27:20.030 "data_size": 7936 00:27:20.030 }, 00:27:20.030 { 00:27:20.030 "name": "BaseBdev2", 00:27:20.030 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:20.030 "is_configured": true, 00:27:20.030 "data_offset": 256, 00:27:20.030 "data_size": 7936 00:27:20.030 } 00:27:20.030 ] 00:27:20.030 }' 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:20.030 18:42:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.030 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.598 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:20.598 "name": "raid_bdev1", 00:27:20.598 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:20.598 "strip_size_kb": 0, 00:27:20.598 "state": "online", 00:27:20.598 "raid_level": "raid1", 00:27:20.598 "superblock": true, 00:27:20.598 "num_base_bdevs": 2, 00:27:20.598 "num_base_bdevs_discovered": 2, 00:27:20.598 "num_base_bdevs_operational": 2, 00:27:20.598 "base_bdevs_list": [ 00:27:20.598 { 00:27:20.598 "name": "spare", 00:27:20.598 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:20.598 "is_configured": true, 00:27:20.598 "data_offset": 256, 00:27:20.598 "data_size": 7936 00:27:20.598 }, 00:27:20.598 { 00:27:20.598 "name": "BaseBdev2", 00:27:20.598 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:20.598 "is_configured": true, 00:27:20.598 "data_offset": 256, 00:27:20.598 "data_size": 7936 00:27:20.598 } 00:27:20.598 ] 00:27:20.598 }' 00:27:20.598 18:42:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:20.598 18:42:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.598 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.857 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.857 "name": "raid_bdev1", 00:27:20.857 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:20.857 "strip_size_kb": 0, 00:27:20.857 "state": "online", 00:27:20.857 "raid_level": "raid1", 00:27:20.857 "superblock": 
true, 00:27:20.857 "num_base_bdevs": 2, 00:27:20.857 "num_base_bdevs_discovered": 2, 00:27:20.857 "num_base_bdevs_operational": 2, 00:27:20.857 "base_bdevs_list": [ 00:27:20.857 { 00:27:20.857 "name": "spare", 00:27:20.857 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:20.857 "is_configured": true, 00:27:20.857 "data_offset": 256, 00:27:20.857 "data_size": 7936 00:27:20.857 }, 00:27:20.857 { 00:27:20.857 "name": "BaseBdev2", 00:27:20.857 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:20.857 "is_configured": true, 00:27:20.857 "data_offset": 256, 00:27:20.857 "data_size": 7936 00:27:20.857 } 00:27:20.857 ] 00:27:20.857 }' 00:27:20.857 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.857 18:42:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.791 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:22.049 [2024-07-15 18:42:07.434362] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:22.049 [2024-07-15 18:42:07.434389] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:22.049 [2024-07-15 18:42:07.434449] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:22.049 [2024-07-15 18:42:07.434505] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:22.049 [2024-07-15 18:42:07.434514] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x263f370 name raid_bdev1, state offline 00:27:22.049 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.049 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 
-- # jq length 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:22.306 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:22.564 /dev/nbd0 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # 
local i 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:22.564 18:42:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:22.564 1+0 records in 00:27:22.564 1+0 records out 00:27:22.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215155 s, 19.0 MB/s 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:22.564 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:22.821 /dev/nbd1 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:22.821 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:22.822 1+0 records in 00:27:22.822 1+0 records out 00:27:22.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237902 s, 17.2 MB/s 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:22.822 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- 
# (( i <= 20 )) 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:23.079 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:23.709 18:42:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:23.709 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:23.966 
[2024-07-15 18:42:09.347225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:23.966 [2024-07-15 18:42:09.347277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.966 [2024-07-15 18:42:09.347297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248f2b0 00:27:23.966 [2024-07-15 18:42:09.347307] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.966 [2024-07-15 18:42:09.349035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.966 [2024-07-15 18:42:09.349063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:23.966 [2024-07-15 18:42:09.349145] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:23.966 [2024-07-15 18:42:09.349172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:23.966 [2024-07-15 18:42:09.349279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:23.966 spare 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- 
# local num_base_bdevs 00:27:23.966 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:23.967 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.967 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.967 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.967 [2024-07-15 18:42:09.449599] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x248e2b0 00:27:23.967 [2024-07-15 18:42:09.449614] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:23.967 [2024-07-15 18:42:09.449826] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2639ef0 00:27:23.967 [2024-07-15 18:42:09.449998] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x248e2b0 00:27:23.967 [2024-07-15 18:42:09.450008] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x248e2b0 00:27:23.967 [2024-07-15 18:42:09.450126] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.223 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.223 "name": "raid_bdev1", 00:27:24.223 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:24.223 "strip_size_kb": 0, 00:27:24.223 "state": "online", 00:27:24.223 "raid_level": "raid1", 00:27:24.223 "superblock": true, 00:27:24.223 "num_base_bdevs": 2, 00:27:24.223 "num_base_bdevs_discovered": 2, 00:27:24.223 "num_base_bdevs_operational": 2, 00:27:24.223 "base_bdevs_list": [ 00:27:24.223 { 00:27:24.223 "name": "spare", 00:27:24.223 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:24.223 "is_configured": true, 00:27:24.223 "data_offset": 256, 
00:27:24.223 "data_size": 7936 00:27:24.223 }, 00:27:24.223 { 00:27:24.223 "name": "BaseBdev2", 00:27:24.223 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:24.223 "is_configured": true, 00:27:24.223 "data_offset": 256, 00:27:24.223 "data_size": 7936 00:27:24.223 } 00:27:24.223 ] 00:27:24.223 }' 00:27:24.223 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.224 18:42:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.789 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.047 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.047 "name": "raid_bdev1", 00:27:25.047 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:25.047 "strip_size_kb": 0, 00:27:25.047 "state": "online", 00:27:25.047 "raid_level": "raid1", 00:27:25.047 "superblock": true, 00:27:25.047 "num_base_bdevs": 2, 00:27:25.047 "num_base_bdevs_discovered": 2, 00:27:25.047 "num_base_bdevs_operational": 2, 00:27:25.047 "base_bdevs_list": [ 00:27:25.047 { 00:27:25.047 "name": "spare", 00:27:25.047 "uuid": 
"3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:25.047 "is_configured": true, 00:27:25.047 "data_offset": 256, 00:27:25.047 "data_size": 7936 00:27:25.047 }, 00:27:25.047 { 00:27:25.047 "name": "BaseBdev2", 00:27:25.047 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:25.047 "is_configured": true, 00:27:25.047 "data_offset": 256, 00:27:25.047 "data_size": 7936 00:27:25.047 } 00:27:25.047 ] 00:27:25.047 }' 00:27:25.047 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.047 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:25.047 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.306 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:25.306 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.306 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:25.564 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.564 18:42:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:25.564 [2024-07-15 18:42:11.108197] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.822 
18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.822 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.096 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.096 "name": "raid_bdev1", 00:27:26.096 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:26.096 "strip_size_kb": 0, 00:27:26.096 "state": "online", 00:27:26.096 "raid_level": "raid1", 00:27:26.096 "superblock": true, 00:27:26.096 "num_base_bdevs": 2, 00:27:26.096 "num_base_bdevs_discovered": 1, 00:27:26.096 "num_base_bdevs_operational": 1, 00:27:26.096 "base_bdevs_list": [ 00:27:26.096 { 00:27:26.096 "name": null, 00:27:26.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.096 "is_configured": false, 00:27:26.096 "data_offset": 256, 00:27:26.096 "data_size": 7936 00:27:26.096 }, 00:27:26.096 { 00:27:26.096 "name": "BaseBdev2", 00:27:26.096 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:26.096 "is_configured": true, 00:27:26.096 "data_offset": 256, 00:27:26.096 "data_size": 
7936 00:27:26.096 } 00:27:26.096 ] 00:27:26.096 }' 00:27:26.096 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.096 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.662 18:42:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:26.662 [2024-07-15 18:42:12.155029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:26.662 [2024-07-15 18:42:12.155184] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:26.662 [2024-07-15 18:42:12.155199] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:26.662 [2024-07-15 18:42:12.155226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:26.662 [2024-07-15 18:42:12.159942] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2639ef0 00:27:26.662 [2024-07-15 18:42:12.161361] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:26.662 18:42:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.037 18:42:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.037 "name": "raid_bdev1", 00:27:28.037 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:28.037 "strip_size_kb": 0, 00:27:28.037 "state": "online", 00:27:28.037 "raid_level": "raid1", 00:27:28.037 "superblock": true, 00:27:28.037 "num_base_bdevs": 2, 00:27:28.037 "num_base_bdevs_discovered": 2, 00:27:28.037 "num_base_bdevs_operational": 2, 00:27:28.037 "process": { 00:27:28.037 "type": "rebuild", 00:27:28.037 "target": "spare", 00:27:28.037 "progress": { 00:27:28.037 "blocks": 3072, 00:27:28.037 "percent": 38 00:27:28.037 } 00:27:28.037 }, 00:27:28.037 "base_bdevs_list": [ 00:27:28.037 { 00:27:28.037 "name": "spare", 00:27:28.037 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:28.037 "is_configured": true, 00:27:28.037 "data_offset": 256, 00:27:28.037 "data_size": 7936 00:27:28.037 }, 00:27:28.037 { 00:27:28.037 "name": "BaseBdev2", 00:27:28.037 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:28.037 "is_configured": true, 00:27:28.037 "data_offset": 256, 00:27:28.037 "data_size": 7936 00:27:28.037 } 00:27:28.037 ] 00:27:28.037 }' 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.037 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.037 18:42:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:28.295 [2024-07-15 18:42:13.780626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.553 [2024-07-15 18:42:13.874172] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:28.553 [2024-07-15 18:42:13.874211] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.553 [2024-07-15 18:42:13.874225] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.553 [2024-07-15 18:42:13.874231] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.553 18:42:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.811 18:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.811 "name": "raid_bdev1", 00:27:28.811 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:28.811 "strip_size_kb": 0, 00:27:28.811 "state": "online", 00:27:28.812 "raid_level": "raid1", 00:27:28.812 "superblock": true, 00:27:28.812 "num_base_bdevs": 2, 00:27:28.812 "num_base_bdevs_discovered": 1, 00:27:28.812 "num_base_bdevs_operational": 1, 00:27:28.812 "base_bdevs_list": [ 00:27:28.812 { 00:27:28.812 "name": null, 00:27:28.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.812 "is_configured": false, 00:27:28.812 "data_offset": 256, 00:27:28.812 "data_size": 7936 00:27:28.812 }, 00:27:28.812 { 00:27:28.812 "name": "BaseBdev2", 00:27:28.812 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:28.812 "is_configured": true, 00:27:28.812 "data_offset": 256, 00:27:28.812 "data_size": 7936 00:27:28.812 } 00:27:28.812 ] 00:27:28.812 }' 00:27:28.812 18:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.812 18:42:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.377 18:42:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:29.635 [2024-07-15 18:42:15.021636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:29.635 [2024-07-15 18:42:15.021686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.635 [2024-07-15 18:42:15.021707] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x263ef70 00:27:29.635 [2024-07-15 18:42:15.021717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.635 [2024-07-15 18:42:15.022110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.635 [2024-07-15 18:42:15.022126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:29.635 [2024-07-15 18:42:15.022205] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:29.635 [2024-07-15 18:42:15.022216] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:29.635 [2024-07-15 18:42:15.022223] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:29.635 [2024-07-15 18:42:15.022245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:29.635 [2024-07-15 18:42:15.026931] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2495e00 00:27:29.635 spare 00:27:29.635 [2024-07-15 18:42:15.028346] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:29.635 18:42:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.569 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.826 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.827 "name": "raid_bdev1", 00:27:30.827 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:30.827 "strip_size_kb": 0, 00:27:30.827 "state": "online", 00:27:30.827 "raid_level": "raid1", 00:27:30.827 "superblock": true, 00:27:30.827 "num_base_bdevs": 2, 00:27:30.827 "num_base_bdevs_discovered": 2, 00:27:30.827 "num_base_bdevs_operational": 2, 00:27:30.827 "process": { 00:27:30.827 "type": "rebuild", 00:27:30.827 "target": "spare", 00:27:30.827 "progress": { 00:27:30.827 "blocks": 3072, 00:27:30.827 "percent": 38 00:27:30.827 } 00:27:30.827 }, 00:27:30.827 "base_bdevs_list": [ 00:27:30.827 { 00:27:30.827 "name": "spare", 00:27:30.827 "uuid": "3574ccb1-3fe9-51e4-a1fa-844163b56ce3", 00:27:30.827 "is_configured": true, 00:27:30.827 "data_offset": 256, 00:27:30.827 "data_size": 7936 00:27:30.827 }, 00:27:30.827 { 00:27:30.827 "name": "BaseBdev2", 00:27:30.827 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:30.827 "is_configured": true, 00:27:30.827 "data_offset": 256, 00:27:30.827 "data_size": 7936 00:27:30.827 } 00:27:30.827 ] 00:27:30.827 }' 00:27:30.827 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:30.827 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:30.827 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.085 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.085 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:31.344 [2024-07-15 18:42:16.648065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.344 [2024-07-15 18:42:16.741175] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:31.344 [2024-07-15 18:42:16.741216] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.344 [2024-07-15 18:42:16.741230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.344 [2024-07-15 18:42:16.741237] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.344 18:42:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.603 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.603 "name": "raid_bdev1", 00:27:31.603 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:31.603 "strip_size_kb": 0, 00:27:31.603 "state": "online", 00:27:31.603 "raid_level": "raid1", 00:27:31.603 "superblock": true, 00:27:31.603 "num_base_bdevs": 2, 00:27:31.603 "num_base_bdevs_discovered": 1, 00:27:31.603 "num_base_bdevs_operational": 1, 00:27:31.603 "base_bdevs_list": [ 00:27:31.603 { 00:27:31.603 "name": null, 00:27:31.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.603 "is_configured": false, 00:27:31.603 "data_offset": 256, 00:27:31.603 "data_size": 7936 00:27:31.603 }, 00:27:31.603 { 00:27:31.603 "name": "BaseBdev2", 00:27:31.603 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:31.603 "is_configured": true, 00:27:31.603 "data_offset": 256, 00:27:31.603 "data_size": 7936 00:27:31.603 } 00:27:31.603 ] 00:27:31.603 }' 00:27:31.603 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.603 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:32.168 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:32.168 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.168 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:32.168 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:32.168 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.168 
18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.168 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.425 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.425 "name": "raid_bdev1", 00:27:32.425 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:32.425 "strip_size_kb": 0, 00:27:32.425 "state": "online", 00:27:32.425 "raid_level": "raid1", 00:27:32.425 "superblock": true, 00:27:32.425 "num_base_bdevs": 2, 00:27:32.425 "num_base_bdevs_discovered": 1, 00:27:32.425 "num_base_bdevs_operational": 1, 00:27:32.425 "base_bdevs_list": [ 00:27:32.425 { 00:27:32.425 "name": null, 00:27:32.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.425 "is_configured": false, 00:27:32.425 "data_offset": 256, 00:27:32.425 "data_size": 7936 00:27:32.425 }, 00:27:32.425 { 00:27:32.425 "name": "BaseBdev2", 00:27:32.425 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:32.425 "is_configured": true, 00:27:32.425 "data_offset": 256, 00:27:32.425 "data_size": 7936 00:27:32.425 } 00:27:32.425 ] 00:27:32.425 }' 00:27:32.425 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.425 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:32.425 18:42:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.682 18:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:32.682 18:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:32.939 18:42:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:33.197 [2024-07-15 18:42:18.498315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:33.197 [2024-07-15 18:42:18.498362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.197 [2024-07-15 18:42:18.498384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2496360 00:27:33.197 [2024-07-15 18:42:18.498399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.197 [2024-07-15 18:42:18.498754] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.198 [2024-07-15 18:42:18.498769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:33.198 [2024-07-15 18:42:18.498832] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:33.198 [2024-07-15 18:42:18.498843] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:33.198 [2024-07-15 18:42:18.498850] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:33.198 BaseBdev1 00:27:33.198 18:42:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.132 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.390 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.390 "name": "raid_bdev1", 00:27:34.390 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:34.390 "strip_size_kb": 0, 00:27:34.390 "state": "online", 00:27:34.390 "raid_level": "raid1", 00:27:34.390 "superblock": true, 00:27:34.390 "num_base_bdevs": 2, 00:27:34.390 "num_base_bdevs_discovered": 1, 00:27:34.390 "num_base_bdevs_operational": 1, 00:27:34.390 "base_bdevs_list": [ 00:27:34.390 { 00:27:34.390 "name": null, 00:27:34.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.390 "is_configured": false, 00:27:34.390 "data_offset": 256, 00:27:34.390 "data_size": 7936 00:27:34.390 }, 00:27:34.390 { 00:27:34.390 "name": "BaseBdev2", 00:27:34.390 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:34.390 "is_configured": true, 00:27:34.390 "data_offset": 256, 00:27:34.390 "data_size": 7936 00:27:34.390 } 00:27:34.390 ] 00:27:34.390 }' 00:27:34.390 18:42:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.390 18:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.955 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.212 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:35.212 "name": "raid_bdev1", 00:27:35.212 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:35.212 "strip_size_kb": 0, 00:27:35.212 "state": "online", 00:27:35.212 "raid_level": "raid1", 00:27:35.212 "superblock": true, 00:27:35.212 "num_base_bdevs": 2, 00:27:35.212 "num_base_bdevs_discovered": 1, 00:27:35.212 "num_base_bdevs_operational": 1, 00:27:35.212 "base_bdevs_list": [ 00:27:35.212 { 00:27:35.212 "name": null, 00:27:35.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.212 "is_configured": false, 00:27:35.212 "data_offset": 256, 00:27:35.212 "data_size": 7936 00:27:35.212 }, 00:27:35.212 { 00:27:35.212 "name": "BaseBdev2", 00:27:35.212 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:35.212 "is_configured": true, 00:27:35.212 "data_offset": 256, 00:27:35.212 "data_size": 
7936 00:27:35.212 } 00:27:35.212 ] 00:27:35.212 }' 00:27:35.212 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:35.470 18:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:35.727 [2024-07-15 18:42:21.057231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:35.727 [2024-07-15 18:42:21.057353] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:35.727 [2024-07-15 18:42:21.057367] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:35.727 request: 00:27:35.727 { 00:27:35.727 "base_bdev": "BaseBdev1", 00:27:35.727 "raid_bdev": "raid_bdev1", 00:27:35.727 "method": "bdev_raid_add_base_bdev", 00:27:35.727 "req_id": 1 00:27:35.727 } 00:27:35.727 Got JSON-RPC error response 00:27:35.727 response: 00:27:35.727 { 00:27:35.727 "code": -22, 00:27:35.727 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:35.727 } 00:27:35.727 18:42:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:27:35.727 18:42:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.727 18:42:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.727 18:42:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.727 18:42:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.662 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.921 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.921 "name": "raid_bdev1", 00:27:36.921 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:36.921 "strip_size_kb": 0, 00:27:36.921 "state": "online", 00:27:36.921 "raid_level": "raid1", 00:27:36.921 "superblock": true, 00:27:36.921 "num_base_bdevs": 2, 00:27:36.921 "num_base_bdevs_discovered": 1, 00:27:36.921 "num_base_bdevs_operational": 1, 00:27:36.921 "base_bdevs_list": [ 00:27:36.921 { 00:27:36.921 "name": null, 00:27:36.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.921 "is_configured": false, 00:27:36.921 
"data_offset": 256, 00:27:36.921 "data_size": 7936 00:27:36.921 }, 00:27:36.921 { 00:27:36.921 "name": "BaseBdev2", 00:27:36.921 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:36.921 "is_configured": true, 00:27:36.921 "data_offset": 256, 00:27:36.921 "data_size": 7936 00:27:36.921 } 00:27:36.921 ] 00:27:36.921 }' 00:27:36.921 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.921 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.487 18:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.745 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:37.745 "name": "raid_bdev1", 00:27:37.745 "uuid": "739a521d-9a63-4cf8-a75f-3881861786c4", 00:27:37.745 "strip_size_kb": 0, 00:27:37.745 "state": "online", 00:27:37.745 "raid_level": "raid1", 00:27:37.746 "superblock": true, 00:27:37.746 "num_base_bdevs": 2, 00:27:37.746 "num_base_bdevs_discovered": 1, 00:27:37.746 "num_base_bdevs_operational": 1, 00:27:37.746 "base_bdevs_list": [ 00:27:37.746 { 00:27:37.746 "name": null, 00:27:37.746 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:37.746 "is_configured": false, 00:27:37.746 "data_offset": 256, 00:27:37.746 "data_size": 7936 00:27:37.746 }, 00:27:37.746 { 00:27:37.746 "name": "BaseBdev2", 00:27:37.746 "uuid": "fc4959f9-54b7-5bb2-814a-9fa9f8712f5e", 00:27:37.746 "is_configured": true, 00:27:37.746 "data_offset": 256, 00:27:37.746 "data_size": 7936 00:27:37.746 } 00:27:37.746 ] 00:27:37.746 }' 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2932899 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2932899 ']' 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2932899 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2932899 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2932899' 00:27:37.746 killing process with pid 2932899 00:27:37.746 18:42:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2932899 00:27:37.746 Received shutdown signal, test time was about 60.000000 seconds 00:27:37.746 00:27:37.746 Latency(us) 00:27:37.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.746 =================================================================================================================== 00:27:37.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:37.746 [2024-07-15 18:42:23.283049] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:37.746 [2024-07-15 18:42:23.283137] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:37.746 [2024-07-15 18:42:23.283187] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:37.746 [2024-07-15 18:42:23.283197] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248e2b0 name raid_bdev1, state offline 00:27:37.746 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2932899 00:27:38.046 [2024-07-15 18:42:23.309892] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:38.046 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:27:38.046 00:27:38.046 real 0m33.595s 00:27:38.046 user 0m55.212s 00:27:38.046 sys 0m4.282s 00:27:38.046 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.046 18:42:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:38.046 ************************************ 00:27:38.046 END TEST raid_rebuild_test_sb_4k 00:27:38.046 ************************************ 00:27:38.046 18:42:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:38.046 18:42:23 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:27:38.046 18:42:23 bdev_raid -- bdev/bdev_raid.sh@905 -- # 
run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:38.046 18:42:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:38.046 18:42:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.046 18:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:38.046 ************************************ 00:27:38.046 START TEST raid_state_function_test_sb_md_separate 00:27:38.046 ************************************ 00:27:38.046 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:27:38.046 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:38.046 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:38.046 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:38.046 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:38.046 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:38.047 18:42:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2938958 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2938958' 00:27:38.047 Process raid pid: 2938958 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 2938958 /var/tmp/spdk-raid.sock 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2938958 ']' 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.047 18:42:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:38.316 [2024-07-15 18:42:23.621056] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:27:38.316 [2024-07-15 18:42:23.621137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.316 [2024-07-15 18:42:23.723127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.316 [2024-07-15 18:42:23.815661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.574 [2024-07-15 18:42:23.874248] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.574 [2024-07-15 18:42:23.874279] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:39.141 [2024-07-15 18:42:24.649017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:39.141 [2024-07-15 18:42:24.649059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:39.141 [2024-07-15 18:42:24.649069] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:39.141 [2024-07-15 18:42:24.649077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.141 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.400 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:39.400 "name": "Existed_Raid", 00:27:39.400 "uuid": "758ea536-5670-4cc6-9baf-c9c015626a7f", 00:27:39.400 "strip_size_kb": 0, 00:27:39.400 "state": "configuring", 00:27:39.400 "raid_level": "raid1", 00:27:39.400 "superblock": true, 00:27:39.400 "num_base_bdevs": 2, 00:27:39.400 "num_base_bdevs_discovered": 0, 00:27:39.400 "num_base_bdevs_operational": 2, 00:27:39.400 "base_bdevs_list": [ 00:27:39.400 { 00:27:39.400 "name": "BaseBdev1", 
00:27:39.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.400 "is_configured": false, 00:27:39.400 "data_offset": 0, 00:27:39.400 "data_size": 0 00:27:39.400 }, 00:27:39.400 { 00:27:39.400 "name": "BaseBdev2", 00:27:39.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.400 "is_configured": false, 00:27:39.400 "data_offset": 0, 00:27:39.400 "data_size": 0 00:27:39.400 } 00:27:39.400 ] 00:27:39.400 }' 00:27:39.400 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:39.400 18:42:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.335 18:42:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:40.335 [2024-07-15 18:42:25.799966] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:40.335 [2024-07-15 18:42:25.799995] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10c9b80 name Existed_Raid, state configuring 00:27:40.335 18:42:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:40.594 [2024-07-15 18:42:26.060675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:40.594 [2024-07-15 18:42:26.060701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:40.594 [2024-07-15 18:42:26.060712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:40.594 [2024-07-15 18:42:26.060721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:40.594 18:42:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:40.852 [2024-07-15 18:42:26.327309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.852 BaseBdev1 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:40.852 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:41.110 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:41.367 [ 00:27:41.367 { 00:27:41.367 "name": "BaseBdev1", 00:27:41.367 "aliases": [ 00:27:41.367 "8ae63e83-acc2-45b2-9e91-188c2f3460f4" 00:27:41.367 ], 00:27:41.367 "product_name": "Malloc disk", 00:27:41.367 "block_size": 4096, 00:27:41.367 "num_blocks": 8192, 00:27:41.367 "uuid": "8ae63e83-acc2-45b2-9e91-188c2f3460f4", 00:27:41.367 "md_size": 32, 00:27:41.367 "md_interleave": false, 00:27:41.367 "dif_type": 
0, 00:27:41.367 "assigned_rate_limits": { 00:27:41.367 "rw_ios_per_sec": 0, 00:27:41.367 "rw_mbytes_per_sec": 0, 00:27:41.367 "r_mbytes_per_sec": 0, 00:27:41.367 "w_mbytes_per_sec": 0 00:27:41.367 }, 00:27:41.367 "claimed": true, 00:27:41.367 "claim_type": "exclusive_write", 00:27:41.367 "zoned": false, 00:27:41.367 "supported_io_types": { 00:27:41.367 "read": true, 00:27:41.367 "write": true, 00:27:41.367 "unmap": true, 00:27:41.367 "flush": true, 00:27:41.367 "reset": true, 00:27:41.367 "nvme_admin": false, 00:27:41.367 "nvme_io": false, 00:27:41.367 "nvme_io_md": false, 00:27:41.367 "write_zeroes": true, 00:27:41.367 "zcopy": true, 00:27:41.367 "get_zone_info": false, 00:27:41.367 "zone_management": false, 00:27:41.367 "zone_append": false, 00:27:41.367 "compare": false, 00:27:41.367 "compare_and_write": false, 00:27:41.367 "abort": true, 00:27:41.367 "seek_hole": false, 00:27:41.367 "seek_data": false, 00:27:41.367 "copy": true, 00:27:41.367 "nvme_iov_md": false 00:27:41.367 }, 00:27:41.367 "memory_domains": [ 00:27:41.367 { 00:27:41.367 "dma_device_id": "system", 00:27:41.367 "dma_device_type": 1 00:27:41.367 }, 00:27:41.367 { 00:27:41.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.367 "dma_device_type": 2 00:27:41.367 } 00:27:41.367 ], 00:27:41.367 "driver_specific": {} 00:27:41.367 } 00:27:41.367 ] 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.367 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.368 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.368 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.368 18:42:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.625 18:42:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.625 "name": "Existed_Raid", 00:27:41.625 "uuid": "59b64bd0-3d11-4751-852b-3a5ef02837c8", 00:27:41.625 "strip_size_kb": 0, 00:27:41.625 "state": "configuring", 00:27:41.625 "raid_level": "raid1", 00:27:41.625 "superblock": true, 00:27:41.625 "num_base_bdevs": 2, 00:27:41.625 "num_base_bdevs_discovered": 1, 00:27:41.625 "num_base_bdevs_operational": 2, 00:27:41.625 "base_bdevs_list": [ 00:27:41.625 { 00:27:41.625 "name": "BaseBdev1", 00:27:41.625 "uuid": "8ae63e83-acc2-45b2-9e91-188c2f3460f4", 00:27:41.625 "is_configured": true, 00:27:41.625 "data_offset": 256, 00:27:41.626 "data_size": 7936 00:27:41.626 }, 00:27:41.626 { 00:27:41.626 "name": "BaseBdev2", 00:27:41.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.626 
"is_configured": false, 00:27:41.626 "data_offset": 0, 00:27:41.626 "data_size": 0 00:27:41.626 } 00:27:41.626 ] 00:27:41.626 }' 00:27:41.626 18:42:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.626 18:42:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.561 18:42:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:42.819 [2024-07-15 18:42:28.216402] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:42.819 [2024-07-15 18:42:28.216444] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10c9470 name Existed_Raid, state configuring 00:27:42.819 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:43.386 [2024-07-15 18:42:28.713763] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:43.386 [2024-07-15 18:42:28.715301] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:43.386 [2024-07-15 18:42:28.715333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.386 18:42:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:43.644 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.644 "name": "Existed_Raid", 00:27:43.644 "uuid": "bb709539-d77b-456c-8d6a-9e90caae4a82", 00:27:43.644 "strip_size_kb": 0, 00:27:43.644 "state": "configuring", 00:27:43.644 "raid_level": "raid1", 00:27:43.644 "superblock": true, 00:27:43.644 "num_base_bdevs": 2, 00:27:43.644 "num_base_bdevs_discovered": 1, 00:27:43.644 "num_base_bdevs_operational": 2, 00:27:43.644 "base_bdevs_list": [ 00:27:43.644 { 00:27:43.644 "name": "BaseBdev1", 
00:27:43.644 "uuid": "8ae63e83-acc2-45b2-9e91-188c2f3460f4", 00:27:43.644 "is_configured": true, 00:27:43.644 "data_offset": 256, 00:27:43.644 "data_size": 7936 00:27:43.644 }, 00:27:43.644 { 00:27:43.644 "name": "BaseBdev2", 00:27:43.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.644 "is_configured": false, 00:27:43.644 "data_offset": 0, 00:27:43.644 "data_size": 0 00:27:43.644 } 00:27:43.644 ] 00:27:43.644 }' 00:27:43.644 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.644 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.210 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:44.468 [2024-07-15 18:42:29.812489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:44.468 [2024-07-15 18:42:29.812629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1266cd0 00:27:44.468 [2024-07-15 18:42:29.812641] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:44.468 [2024-07-15 18:42:29.812706] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1266710 00:27:44.468 [2024-07-15 18:42:29.812806] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1266cd0 00:27:44.468 [2024-07-15 18:42:29.812815] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1266cd0 00:27:44.468 [2024-07-15 18:42:29.812881] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.468 BaseBdev2 00:27:44.468 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:44.468 18:42:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:44.468 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:44.468 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:27:44.468 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:44.468 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:44.468 18:42:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:44.725 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:44.983 [ 00:27:44.983 { 00:27:44.983 "name": "BaseBdev2", 00:27:44.983 "aliases": [ 00:27:44.983 "e0831039-bd28-4bab-a9f9-15fc6131d8aa" 00:27:44.983 ], 00:27:44.983 "product_name": "Malloc disk", 00:27:44.983 "block_size": 4096, 00:27:44.983 "num_blocks": 8192, 00:27:44.983 "uuid": "e0831039-bd28-4bab-a9f9-15fc6131d8aa", 00:27:44.983 "md_size": 32, 00:27:44.983 "md_interleave": false, 00:27:44.983 "dif_type": 0, 00:27:44.983 "assigned_rate_limits": { 00:27:44.983 "rw_ios_per_sec": 0, 00:27:44.983 "rw_mbytes_per_sec": 0, 00:27:44.983 "r_mbytes_per_sec": 0, 00:27:44.983 "w_mbytes_per_sec": 0 00:27:44.983 }, 00:27:44.983 "claimed": true, 00:27:44.983 "claim_type": "exclusive_write", 00:27:44.983 "zoned": false, 00:27:44.983 "supported_io_types": { 00:27:44.983 "read": true, 00:27:44.983 "write": true, 00:27:44.983 "unmap": true, 00:27:44.983 "flush": true, 00:27:44.983 "reset": true, 00:27:44.983 "nvme_admin": 
false, 00:27:44.983 "nvme_io": false, 00:27:44.983 "nvme_io_md": false, 00:27:44.983 "write_zeroes": true, 00:27:44.983 "zcopy": true, 00:27:44.983 "get_zone_info": false, 00:27:44.983 "zone_management": false, 00:27:44.983 "zone_append": false, 00:27:44.983 "compare": false, 00:27:44.983 "compare_and_write": false, 00:27:44.983 "abort": true, 00:27:44.983 "seek_hole": false, 00:27:44.983 "seek_data": false, 00:27:44.983 "copy": true, 00:27:44.983 "nvme_iov_md": false 00:27:44.983 }, 00:27:44.983 "memory_domains": [ 00:27:44.983 { 00:27:44.983 "dma_device_id": "system", 00:27:44.983 "dma_device_type": 1 00:27:44.983 }, 00:27:44.983 { 00:27:44.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.983 "dma_device_type": 2 00:27:44.983 } 00:27:44.983 ], 00:27:44.983 "driver_specific": {} 00:27:44.983 } 00:27:44.983 ] 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.983 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.240 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.240 "name": "Existed_Raid", 00:27:45.240 "uuid": "bb709539-d77b-456c-8d6a-9e90caae4a82", 00:27:45.240 "strip_size_kb": 0, 00:27:45.240 "state": "online", 00:27:45.240 "raid_level": "raid1", 00:27:45.240 "superblock": true, 00:27:45.240 "num_base_bdevs": 2, 00:27:45.240 "num_base_bdevs_discovered": 2, 00:27:45.240 "num_base_bdevs_operational": 2, 00:27:45.240 "base_bdevs_list": [ 00:27:45.240 { 00:27:45.240 "name": "BaseBdev1", 00:27:45.240 "uuid": "8ae63e83-acc2-45b2-9e91-188c2f3460f4", 00:27:45.240 "is_configured": true, 00:27:45.240 "data_offset": 256, 00:27:45.240 "data_size": 7936 00:27:45.240 }, 00:27:45.240 { 00:27:45.240 "name": "BaseBdev2", 00:27:45.240 "uuid": "e0831039-bd28-4bab-a9f9-15fc6131d8aa", 00:27:45.240 "is_configured": true, 00:27:45.240 "data_offset": 256, 00:27:45.240 "data_size": 7936 00:27:45.240 } 00:27:45.240 ] 00:27:45.240 }' 00:27:45.240 18:42:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.240 18:42:30 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:45.806 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:46.064 [2024-07-15 18:42:31.477302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:46.064 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:46.064 "name": "Existed_Raid", 00:27:46.064 "aliases": [ 00:27:46.064 "bb709539-d77b-456c-8d6a-9e90caae4a82" 00:27:46.064 ], 00:27:46.064 "product_name": "Raid Volume", 00:27:46.065 "block_size": 4096, 00:27:46.065 "num_blocks": 7936, 00:27:46.065 "uuid": "bb709539-d77b-456c-8d6a-9e90caae4a82", 00:27:46.065 "md_size": 32, 00:27:46.065 "md_interleave": false, 00:27:46.065 "dif_type": 0, 00:27:46.065 "assigned_rate_limits": { 00:27:46.065 "rw_ios_per_sec": 0, 00:27:46.065 "rw_mbytes_per_sec": 0, 00:27:46.065 "r_mbytes_per_sec": 0, 00:27:46.065 
"w_mbytes_per_sec": 0 00:27:46.065 }, 00:27:46.065 "claimed": false, 00:27:46.065 "zoned": false, 00:27:46.065 "supported_io_types": { 00:27:46.065 "read": true, 00:27:46.065 "write": true, 00:27:46.065 "unmap": false, 00:27:46.065 "flush": false, 00:27:46.065 "reset": true, 00:27:46.065 "nvme_admin": false, 00:27:46.065 "nvme_io": false, 00:27:46.065 "nvme_io_md": false, 00:27:46.065 "write_zeroes": true, 00:27:46.065 "zcopy": false, 00:27:46.065 "get_zone_info": false, 00:27:46.065 "zone_management": false, 00:27:46.065 "zone_append": false, 00:27:46.065 "compare": false, 00:27:46.065 "compare_and_write": false, 00:27:46.065 "abort": false, 00:27:46.065 "seek_hole": false, 00:27:46.065 "seek_data": false, 00:27:46.065 "copy": false, 00:27:46.065 "nvme_iov_md": false 00:27:46.065 }, 00:27:46.065 "memory_domains": [ 00:27:46.065 { 00:27:46.065 "dma_device_id": "system", 00:27:46.065 "dma_device_type": 1 00:27:46.065 }, 00:27:46.065 { 00:27:46.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.065 "dma_device_type": 2 00:27:46.065 }, 00:27:46.065 { 00:27:46.065 "dma_device_id": "system", 00:27:46.065 "dma_device_type": 1 00:27:46.065 }, 00:27:46.065 { 00:27:46.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.065 "dma_device_type": 2 00:27:46.065 } 00:27:46.065 ], 00:27:46.065 "driver_specific": { 00:27:46.065 "raid": { 00:27:46.065 "uuid": "bb709539-d77b-456c-8d6a-9e90caae4a82", 00:27:46.065 "strip_size_kb": 0, 00:27:46.065 "state": "online", 00:27:46.065 "raid_level": "raid1", 00:27:46.065 "superblock": true, 00:27:46.065 "num_base_bdevs": 2, 00:27:46.065 "num_base_bdevs_discovered": 2, 00:27:46.065 "num_base_bdevs_operational": 2, 00:27:46.065 "base_bdevs_list": [ 00:27:46.065 { 00:27:46.065 "name": "BaseBdev1", 00:27:46.065 "uuid": "8ae63e83-acc2-45b2-9e91-188c2f3460f4", 00:27:46.065 "is_configured": true, 00:27:46.065 "data_offset": 256, 00:27:46.065 "data_size": 7936 00:27:46.065 }, 00:27:46.065 { 00:27:46.065 "name": "BaseBdev2", 00:27:46.065 "uuid": 
"e0831039-bd28-4bab-a9f9-15fc6131d8aa", 00:27:46.065 "is_configured": true, 00:27:46.065 "data_offset": 256, 00:27:46.065 "data_size": 7936 00:27:46.065 } 00:27:46.065 ] 00:27:46.065 } 00:27:46.065 } 00:27:46.065 }' 00:27:46.065 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:46.065 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:46.065 BaseBdev2' 00:27:46.065 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:46.065 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:46.065 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:46.323 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:46.323 "name": "BaseBdev1", 00:27:46.323 "aliases": [ 00:27:46.323 "8ae63e83-acc2-45b2-9e91-188c2f3460f4" 00:27:46.323 ], 00:27:46.323 "product_name": "Malloc disk", 00:27:46.323 "block_size": 4096, 00:27:46.323 "num_blocks": 8192, 00:27:46.323 "uuid": "8ae63e83-acc2-45b2-9e91-188c2f3460f4", 00:27:46.323 "md_size": 32, 00:27:46.323 "md_interleave": false, 00:27:46.323 "dif_type": 0, 00:27:46.323 "assigned_rate_limits": { 00:27:46.323 "rw_ios_per_sec": 0, 00:27:46.323 "rw_mbytes_per_sec": 0, 00:27:46.323 "r_mbytes_per_sec": 0, 00:27:46.323 "w_mbytes_per_sec": 0 00:27:46.323 }, 00:27:46.323 "claimed": true, 00:27:46.323 "claim_type": "exclusive_write", 00:27:46.323 "zoned": false, 00:27:46.323 "supported_io_types": { 00:27:46.323 "read": true, 00:27:46.323 "write": true, 00:27:46.323 "unmap": true, 00:27:46.323 "flush": true, 00:27:46.323 
"reset": true, 00:27:46.323 "nvme_admin": false, 00:27:46.323 "nvme_io": false, 00:27:46.323 "nvme_io_md": false, 00:27:46.323 "write_zeroes": true, 00:27:46.323 "zcopy": true, 00:27:46.323 "get_zone_info": false, 00:27:46.323 "zone_management": false, 00:27:46.323 "zone_append": false, 00:27:46.323 "compare": false, 00:27:46.323 "compare_and_write": false, 00:27:46.323 "abort": true, 00:27:46.323 "seek_hole": false, 00:27:46.323 "seek_data": false, 00:27:46.323 "copy": true, 00:27:46.323 "nvme_iov_md": false 00:27:46.323 }, 00:27:46.323 "memory_domains": [ 00:27:46.323 { 00:27:46.323 "dma_device_id": "system", 00:27:46.323 "dma_device_type": 1 00:27:46.323 }, 00:27:46.323 { 00:27:46.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.323 "dma_device_type": 2 00:27:46.323 } 00:27:46.323 ], 00:27:46.323 "driver_specific": {} 00:27:46.323 }' 00:27:46.323 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:46.323 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:46.581 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:46.581 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:46.581 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:46.581 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:46.581 18:42:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:46.582 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:46.582 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:46.582 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:46.840 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:46.840 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:46.840 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:46.840 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:46.840 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:47.098 "name": "BaseBdev2", 00:27:47.098 "aliases": [ 00:27:47.098 "e0831039-bd28-4bab-a9f9-15fc6131d8aa" 00:27:47.098 ], 00:27:47.098 "product_name": "Malloc disk", 00:27:47.098 "block_size": 4096, 00:27:47.098 "num_blocks": 8192, 00:27:47.098 "uuid": "e0831039-bd28-4bab-a9f9-15fc6131d8aa", 00:27:47.098 "md_size": 32, 00:27:47.098 "md_interleave": false, 00:27:47.098 "dif_type": 0, 00:27:47.098 "assigned_rate_limits": { 00:27:47.098 "rw_ios_per_sec": 0, 00:27:47.098 "rw_mbytes_per_sec": 0, 00:27:47.098 "r_mbytes_per_sec": 0, 00:27:47.098 "w_mbytes_per_sec": 0 00:27:47.098 }, 00:27:47.098 "claimed": true, 00:27:47.098 "claim_type": "exclusive_write", 00:27:47.098 "zoned": false, 00:27:47.098 "supported_io_types": { 00:27:47.098 "read": true, 00:27:47.098 "write": true, 00:27:47.098 "unmap": true, 00:27:47.098 "flush": true, 00:27:47.098 "reset": true, 00:27:47.098 "nvme_admin": false, 00:27:47.098 "nvme_io": false, 00:27:47.098 "nvme_io_md": false, 00:27:47.098 "write_zeroes": true, 00:27:47.098 "zcopy": true, 00:27:47.098 "get_zone_info": false, 00:27:47.098 "zone_management": false, 00:27:47.098 "zone_append": false, 
00:27:47.098 "compare": false, 00:27:47.098 "compare_and_write": false, 00:27:47.098 "abort": true, 00:27:47.098 "seek_hole": false, 00:27:47.098 "seek_data": false, 00:27:47.098 "copy": true, 00:27:47.098 "nvme_iov_md": false 00:27:47.098 }, 00:27:47.098 "memory_domains": [ 00:27:47.098 { 00:27:47.098 "dma_device_id": "system", 00:27:47.098 "dma_device_type": 1 00:27:47.098 }, 00:27:47.098 { 00:27:47.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.098 "dma_device_type": 2 00:27:47.098 } 00:27:47.098 ], 00:27:47.098 "driver_specific": {} 00:27:47.098 }' 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:47.098 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:47.356 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:47.356 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:47.356 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:47.356 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:47.356 18:42:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:47.356 18:42:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:47.614 [2024-07-15 18:42:33.041252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.614 18:42:33 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.614 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.871 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.871 "name": "Existed_Raid", 00:27:47.871 "uuid": "bb709539-d77b-456c-8d6a-9e90caae4a82", 00:27:47.871 "strip_size_kb": 0, 00:27:47.871 "state": "online", 00:27:47.871 "raid_level": "raid1", 00:27:47.871 "superblock": true, 00:27:47.871 "num_base_bdevs": 2, 00:27:47.871 "num_base_bdevs_discovered": 1, 00:27:47.871 "num_base_bdevs_operational": 1, 00:27:47.871 "base_bdevs_list": [ 00:27:47.871 { 00:27:47.871 "name": null, 00:27:47.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.871 "is_configured": false, 00:27:47.871 "data_offset": 256, 00:27:47.871 "data_size": 7936 00:27:47.871 }, 00:27:47.871 { 00:27:47.871 "name": "BaseBdev2", 00:27:47.871 "uuid": "e0831039-bd28-4bab-a9f9-15fc6131d8aa", 00:27:47.871 "is_configured": true, 00:27:47.871 "data_offset": 256, 00:27:47.871 "data_size": 7936 00:27:47.871 } 00:27:47.871 ] 00:27:47.871 }' 00:27:47.871 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.871 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.438 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:48.438 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:48.438 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.438 18:42:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:48.697 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:48.697 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:48.697 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:48.955 [2024-07-15 18:42:34.451857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:48.955 [2024-07-15 18:42:34.451944] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:48.955 [2024-07-15 18:42:34.463559] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:48.955 [2024-07-15 18:42:34.463594] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:48.955 [2024-07-15 18:42:34.463603] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1266cd0 name Existed_Raid, state offline 00:27:48.955 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:48.955 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:48.955 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:48.955 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2938958 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2938958 ']' 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2938958 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.213 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2938958 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2938958' 00:27:49.472 killing process with pid 2938958 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2938958 00:27:49.472 [2024-07-15 18:42:34.789742] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:49.472 18:42:34 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2938958 00:27:49.472 [2024-07-15 18:42:34.790610] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:27:49.472 00:27:49.472 real 0m11.434s 00:27:49.472 user 0m20.827s 00:27:49.472 sys 0m1.680s 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:49.472 18:42:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.472 ************************************ 00:27:49.472 END TEST raid_state_function_test_sb_md_separate 00:27:49.472 ************************************ 00:27:49.748 18:42:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:49.748 18:42:35 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:49.748 18:42:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:49.748 18:42:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:49.748 18:42:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:49.748 ************************************ 00:27:49.748 START TEST raid_superblock_test_md_separate 00:27:49.748 ************************************ 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2940888 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2940888 /var/tmp/spdk-raid.sock 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2940888 ']' 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:49.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:49.748 18:42:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.748 [2024-07-15 18:42:35.093806] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:27:49.748 [2024-07-15 18:42:35.093866] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940888 ] 00:27:49.748 [2024-07-15 18:42:35.192558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.748 [2024-07-15 18:42:35.287809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.015 [2024-07-15 18:42:35.346206] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:50.015 [2024-07-15 18:42:35.346239] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:50.581 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:50.840 malloc1 00:27:50.840 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:51.098 [2024-07-15 18:42:36.540392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:51.098 [2024-07-15 18:42:36.540435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.098 [2024-07-15 18:42:36.540452] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf377c0 00:27:51.098 [2024-07-15 18:42:36.540462] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.098 [2024-07-15 18:42:36.542015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.098 [2024-07-15 18:42:36.542040] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:51.098 pt1 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:51.098 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:51.356 malloc2 00:27:51.356 18:42:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:51.614 [2024-07-15 18:42:37.059103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:51.614 [2024-07-15 18:42:37.059147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.614 [2024-07-15 18:42:37.059166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10c3ef0 00:27:51.614 [2024-07-15 18:42:37.059176] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.614 [2024-07-15 18:42:37.060741] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.614 [2024-07-15 18:42:37.060767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:51.614 pt2 00:27:51.614 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:51.614 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:51.614 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:51.873 [2024-07-15 18:42:37.315802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:51.873 [2024-07-15 18:42:37.317171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:51.873 [2024-07-15 18:42:37.317318] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b9420 00:27:51.873 [2024-07-15 18:42:37.317330] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:51.873 [2024-07-15 18:42:37.317396] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10ba4a0 00:27:51.873 [2024-07-15 18:42:37.317511] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b9420 00:27:51.873 [2024-07-15 18:42:37.317520] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10b9420 00:27:51.873 [2024-07-15 18:42:37.317588] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:51.873 18:42:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.873 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.132 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.132 "name": "raid_bdev1", 00:27:52.132 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:52.132 "strip_size_kb": 0, 00:27:52.132 "state": "online", 00:27:52.132 "raid_level": "raid1", 00:27:52.132 "superblock": true, 00:27:52.132 "num_base_bdevs": 2, 00:27:52.132 "num_base_bdevs_discovered": 2, 00:27:52.132 "num_base_bdevs_operational": 2, 00:27:52.132 "base_bdevs_list": [ 00:27:52.132 { 00:27:52.132 "name": "pt1", 00:27:52.132 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:52.132 "is_configured": true, 00:27:52.132 "data_offset": 256, 00:27:52.132 "data_size": 7936 00:27:52.132 }, 00:27:52.132 { 00:27:52.132 "name": "pt2", 00:27:52.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:52.132 "is_configured": true, 00:27:52.132 "data_offset": 256, 00:27:52.132 "data_size": 7936 00:27:52.132 } 00:27:52.132 ] 00:27:52.132 }' 00:27:52.132 18:42:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.132 18:42:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:52.731 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:52.990 [2024-07-15 18:42:38.463151] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:52.990 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:52.990 "name": "raid_bdev1", 00:27:52.990 "aliases": [ 00:27:52.990 
"f86e0a53-0245-4ab3-b0c3-f8c977274164" 00:27:52.990 ], 00:27:52.990 "product_name": "Raid Volume", 00:27:52.990 "block_size": 4096, 00:27:52.990 "num_blocks": 7936, 00:27:52.990 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:52.990 "md_size": 32, 00:27:52.990 "md_interleave": false, 00:27:52.990 "dif_type": 0, 00:27:52.990 "assigned_rate_limits": { 00:27:52.990 "rw_ios_per_sec": 0, 00:27:52.990 "rw_mbytes_per_sec": 0, 00:27:52.990 "r_mbytes_per_sec": 0, 00:27:52.990 "w_mbytes_per_sec": 0 00:27:52.990 }, 00:27:52.990 "claimed": false, 00:27:52.990 "zoned": false, 00:27:52.990 "supported_io_types": { 00:27:52.990 "read": true, 00:27:52.990 "write": true, 00:27:52.990 "unmap": false, 00:27:52.990 "flush": false, 00:27:52.990 "reset": true, 00:27:52.990 "nvme_admin": false, 00:27:52.990 "nvme_io": false, 00:27:52.990 "nvme_io_md": false, 00:27:52.990 "write_zeroes": true, 00:27:52.990 "zcopy": false, 00:27:52.990 "get_zone_info": false, 00:27:52.990 "zone_management": false, 00:27:52.990 "zone_append": false, 00:27:52.990 "compare": false, 00:27:52.990 "compare_and_write": false, 00:27:52.990 "abort": false, 00:27:52.990 "seek_hole": false, 00:27:52.990 "seek_data": false, 00:27:52.990 "copy": false, 00:27:52.990 "nvme_iov_md": false 00:27:52.990 }, 00:27:52.990 "memory_domains": [ 00:27:52.990 { 00:27:52.990 "dma_device_id": "system", 00:27:52.990 "dma_device_type": 1 00:27:52.990 }, 00:27:52.990 { 00:27:52.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.990 "dma_device_type": 2 00:27:52.990 }, 00:27:52.990 { 00:27:52.990 "dma_device_id": "system", 00:27:52.990 "dma_device_type": 1 00:27:52.990 }, 00:27:52.990 { 00:27:52.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.990 "dma_device_type": 2 00:27:52.990 } 00:27:52.990 ], 00:27:52.990 "driver_specific": { 00:27:52.990 "raid": { 00:27:52.990 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:52.990 "strip_size_kb": 0, 00:27:52.990 "state": "online", 00:27:52.990 "raid_level": "raid1", 
00:27:52.990 "superblock": true, 00:27:52.990 "num_base_bdevs": 2, 00:27:52.990 "num_base_bdevs_discovered": 2, 00:27:52.990 "num_base_bdevs_operational": 2, 00:27:52.990 "base_bdevs_list": [ 00:27:52.990 { 00:27:52.990 "name": "pt1", 00:27:52.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:52.990 "is_configured": true, 00:27:52.990 "data_offset": 256, 00:27:52.990 "data_size": 7936 00:27:52.990 }, 00:27:52.990 { 00:27:52.990 "name": "pt2", 00:27:52.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:52.990 "is_configured": true, 00:27:52.990 "data_offset": 256, 00:27:52.990 "data_size": 7936 00:27:52.990 } 00:27:52.990 ] 00:27:52.990 } 00:27:52.990 } 00:27:52.990 }' 00:27:52.990 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:52.990 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:52.990 pt2' 00:27:52.990 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:52.990 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:52.990 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:53.249 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:53.249 "name": "pt1", 00:27:53.249 "aliases": [ 00:27:53.249 "00000000-0000-0000-0000-000000000001" 00:27:53.249 ], 00:27:53.249 "product_name": "passthru", 00:27:53.249 "block_size": 4096, 00:27:53.249 "num_blocks": 8192, 00:27:53.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:53.249 "md_size": 32, 00:27:53.249 "md_interleave": false, 00:27:53.249 "dif_type": 0, 00:27:53.249 "assigned_rate_limits": { 00:27:53.249 
"rw_ios_per_sec": 0, 00:27:53.249 "rw_mbytes_per_sec": 0, 00:27:53.249 "r_mbytes_per_sec": 0, 00:27:53.249 "w_mbytes_per_sec": 0 00:27:53.249 }, 00:27:53.249 "claimed": true, 00:27:53.249 "claim_type": "exclusive_write", 00:27:53.249 "zoned": false, 00:27:53.249 "supported_io_types": { 00:27:53.249 "read": true, 00:27:53.249 "write": true, 00:27:53.249 "unmap": true, 00:27:53.249 "flush": true, 00:27:53.249 "reset": true, 00:27:53.249 "nvme_admin": false, 00:27:53.249 "nvme_io": false, 00:27:53.249 "nvme_io_md": false, 00:27:53.249 "write_zeroes": true, 00:27:53.249 "zcopy": true, 00:27:53.249 "get_zone_info": false, 00:27:53.249 "zone_management": false, 00:27:53.249 "zone_append": false, 00:27:53.249 "compare": false, 00:27:53.249 "compare_and_write": false, 00:27:53.249 "abort": true, 00:27:53.249 "seek_hole": false, 00:27:53.249 "seek_data": false, 00:27:53.249 "copy": true, 00:27:53.249 "nvme_iov_md": false 00:27:53.249 }, 00:27:53.249 "memory_domains": [ 00:27:53.249 { 00:27:53.249 "dma_device_id": "system", 00:27:53.249 "dma_device_type": 1 00:27:53.249 }, 00:27:53.249 { 00:27:53.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.249 "dma_device_type": 2 00:27:53.249 } 00:27:53.249 ], 00:27:53.249 "driver_specific": { 00:27:53.249 "passthru": { 00:27:53.249 "name": "pt1", 00:27:53.249 "base_bdev_name": "malloc1" 00:27:53.249 } 00:27:53.249 } 00:27:53.249 }' 00:27:53.249 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:53.506 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:53.506 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:53.506 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:53.506 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:53.506 18:42:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:53.506 18:42:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:53.506 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:53.764 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:54.021 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:54.021 "name": "pt2", 00:27:54.021 "aliases": [ 00:27:54.021 "00000000-0000-0000-0000-000000000002" 00:27:54.021 ], 00:27:54.021 "product_name": "passthru", 00:27:54.021 "block_size": 4096, 00:27:54.021 "num_blocks": 8192, 00:27:54.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:54.021 "md_size": 32, 00:27:54.021 "md_interleave": false, 00:27:54.021 "dif_type": 0, 00:27:54.021 "assigned_rate_limits": { 00:27:54.021 "rw_ios_per_sec": 0, 00:27:54.021 "rw_mbytes_per_sec": 0, 00:27:54.021 "r_mbytes_per_sec": 0, 00:27:54.021 "w_mbytes_per_sec": 0 00:27:54.021 }, 00:27:54.021 "claimed": true, 00:27:54.021 "claim_type": "exclusive_write", 00:27:54.021 "zoned": false, 00:27:54.021 
"supported_io_types": { 00:27:54.021 "read": true, 00:27:54.021 "write": true, 00:27:54.021 "unmap": true, 00:27:54.021 "flush": true, 00:27:54.021 "reset": true, 00:27:54.021 "nvme_admin": false, 00:27:54.021 "nvme_io": false, 00:27:54.021 "nvme_io_md": false, 00:27:54.021 "write_zeroes": true, 00:27:54.021 "zcopy": true, 00:27:54.021 "get_zone_info": false, 00:27:54.021 "zone_management": false, 00:27:54.021 "zone_append": false, 00:27:54.021 "compare": false, 00:27:54.021 "compare_and_write": false, 00:27:54.021 "abort": true, 00:27:54.021 "seek_hole": false, 00:27:54.021 "seek_data": false, 00:27:54.021 "copy": true, 00:27:54.021 "nvme_iov_md": false 00:27:54.021 }, 00:27:54.021 "memory_domains": [ 00:27:54.021 { 00:27:54.021 "dma_device_id": "system", 00:27:54.021 "dma_device_type": 1 00:27:54.021 }, 00:27:54.021 { 00:27:54.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.021 "dma_device_type": 2 00:27:54.022 } 00:27:54.022 ], 00:27:54.022 "driver_specific": { 00:27:54.022 "passthru": { 00:27:54.022 "name": "pt2", 00:27:54.022 "base_bdev_name": "malloc2" 00:27:54.022 } 00:27:54.022 } 00:27:54.022 }' 00:27:54.022 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:54.022 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:54.022 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:54.022 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:54.279 18:42:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:54.537 [2024-07-15 18:42:40.039420] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:54.537 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f86e0a53-0245-4ab3-b0c3-f8c977274164 00:27:54.537 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z f86e0a53-0245-4ab3-b0c3-f8c977274164 ']' 00:27:54.537 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:54.794 [2024-07-15 18:42:40.299826] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:54.794 [2024-07-15 18:42:40.299849] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.794 [2024-07-15 18:42:40.299903] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.794 [2024-07-15 18:42:40.299960] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:54.794 [2024-07-15 18:42:40.299970] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b9420 name raid_bdev1, state offline 00:27:54.794 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.794 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:55.052 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:55.052 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:55.052 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.052 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:55.310 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.311 18:42:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:55.568 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:55.568 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 
malloc2' -n raid_bdev1 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:55.826 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:56.084 [2024-07-15 18:42:41.567165] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:56.084 [2024-07-15 18:42:41.568572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:56.084 [2024-07-15 18:42:41.568623] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:56.084 [2024-07-15 18:42:41.568659] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:56.084 [2024-07-15 18:42:41.568674] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:56.084 [2024-07-15 18:42:41.568682] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b96a0 name raid_bdev1, state configuring 00:27:56.084 request: 00:27:56.084 { 00:27:56.084 "name": "raid_bdev1", 00:27:56.084 "raid_level": "raid1", 00:27:56.084 "base_bdevs": [ 00:27:56.084 "malloc1", 00:27:56.085 "malloc2" 00:27:56.085 ], 00:27:56.085 "superblock": false, 00:27:56.085 "method": "bdev_raid_create", 00:27:56.085 "req_id": 1 00:27:56.085 } 00:27:56.085 Got JSON-RPC error response 00:27:56.085 response: 00:27:56.085 { 00:27:56.085 "code": -17, 00:27:56.085 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:56.085 } 00:27:56.085 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:27:56.085 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:56.085 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:56.085 18:42:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:56.085 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:56.085 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:56.343 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:56.343 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:56.343 18:42:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:56.601 [2024-07-15 18:42:42.084489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:56.601 [2024-07-15 18:42:42.084525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.601 [2024-07-15 18:42:42.084538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf379f0 00:27:56.601 [2024-07-15 18:42:42.084547] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.601 [2024-07-15 18:42:42.086032] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.601 [2024-07-15 18:42:42.086056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:56.601 [2024-07-15 18:42:42.086097] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:56.601 [2024-07-15 18:42:42.086120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:56.601 pt1 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:56.601 18:42:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.601 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.859 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.859 "name": "raid_bdev1", 00:27:56.859 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:56.859 "strip_size_kb": 0, 00:27:56.859 "state": "configuring", 00:27:56.859 "raid_level": "raid1", 00:27:56.859 "superblock": true, 00:27:56.859 "num_base_bdevs": 2, 00:27:56.859 "num_base_bdevs_discovered": 1, 00:27:56.859 "num_base_bdevs_operational": 2, 00:27:56.859 "base_bdevs_list": [ 00:27:56.859 { 00:27:56.859 "name": "pt1", 00:27:56.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:56.859 "is_configured": true, 00:27:56.859 "data_offset": 256, 00:27:56.859 "data_size": 7936 00:27:56.859 }, 00:27:56.859 { 00:27:56.859 "name": null, 00:27:56.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:56.859 
"is_configured": false, 00:27:56.859 "data_offset": 256, 00:27:56.859 "data_size": 7936 00:27:56.859 } 00:27:56.859 ] 00:27:56.859 }' 00:27:56.859 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.859 18:42:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:57.425 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:27:57.425 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:57.425 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:57.425 18:42:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:57.682 [2024-07-15 18:42:43.199493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:57.683 [2024-07-15 18:42:43.199546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.683 [2024-07-15 18:42:43.199564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10baeb0 00:27:57.683 [2024-07-15 18:42:43.199573] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.683 [2024-07-15 18:42:43.199749] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.683 [2024-07-15 18:42:43.199764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:57.683 [2024-07-15 18:42:43.199802] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:57.683 [2024-07-15 18:42:43.199819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:57.683 [2024-07-15 18:42:43.199908] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf36280 00:27:57.683 [2024-07-15 18:42:43.199917] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:57.683 [2024-07-15 18:42:43.199981] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bc4a0 00:27:57.683 [2024-07-15 18:42:43.200083] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf36280 00:27:57.683 [2024-07-15 18:42:43.200091] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf36280 00:27:57.683 [2024-07-15 18:42:43.200162] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.683 pt2 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.683 18:42:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.683 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.940 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.940 "name": "raid_bdev1", 00:27:57.940 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:57.940 "strip_size_kb": 0, 00:27:57.940 "state": "online", 00:27:57.940 "raid_level": "raid1", 00:27:57.940 "superblock": true, 00:27:57.940 "num_base_bdevs": 2, 00:27:57.940 "num_base_bdevs_discovered": 2, 00:27:57.940 "num_base_bdevs_operational": 2, 00:27:57.940 "base_bdevs_list": [ 00:27:57.940 { 00:27:57.940 "name": "pt1", 00:27:57.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.940 "is_configured": true, 00:27:57.940 "data_offset": 256, 00:27:57.940 "data_size": 7936 00:27:57.940 }, 00:27:57.940 { 00:27:57.940 "name": "pt2", 00:27:57.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.940 "is_configured": true, 00:27:57.940 "data_offset": 256, 00:27:57.940 "data_size": 7936 00:27:57.940 } 00:27:57.940 ] 00:27:57.940 }' 00:27:57.940 18:42:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.940 18:42:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:58.873 [2024-07-15 18:42:44.334862] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.873 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:58.873 "name": "raid_bdev1", 00:27:58.873 "aliases": [ 00:27:58.873 "f86e0a53-0245-4ab3-b0c3-f8c977274164" 00:27:58.873 ], 00:27:58.873 "product_name": "Raid Volume", 00:27:58.873 "block_size": 4096, 00:27:58.873 "num_blocks": 7936, 00:27:58.873 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:58.873 "md_size": 32, 00:27:58.873 "md_interleave": false, 00:27:58.873 "dif_type": 0, 00:27:58.873 "assigned_rate_limits": { 00:27:58.873 "rw_ios_per_sec": 0, 00:27:58.873 "rw_mbytes_per_sec": 0, 00:27:58.873 "r_mbytes_per_sec": 0, 00:27:58.873 "w_mbytes_per_sec": 0 00:27:58.873 }, 00:27:58.873 "claimed": false, 00:27:58.873 "zoned": false, 00:27:58.873 "supported_io_types": { 00:27:58.873 "read": true, 00:27:58.873 "write": true, 00:27:58.873 "unmap": false, 00:27:58.873 "flush": false, 00:27:58.873 "reset": true, 00:27:58.873 "nvme_admin": false, 00:27:58.873 "nvme_io": false, 00:27:58.873 "nvme_io_md": false, 00:27:58.873 "write_zeroes": true, 
00:27:58.873 "zcopy": false, 00:27:58.873 "get_zone_info": false, 00:27:58.873 "zone_management": false, 00:27:58.873 "zone_append": false, 00:27:58.873 "compare": false, 00:27:58.873 "compare_and_write": false, 00:27:58.873 "abort": false, 00:27:58.873 "seek_hole": false, 00:27:58.873 "seek_data": false, 00:27:58.874 "copy": false, 00:27:58.874 "nvme_iov_md": false 00:27:58.874 }, 00:27:58.874 "memory_domains": [ 00:27:58.874 { 00:27:58.874 "dma_device_id": "system", 00:27:58.874 "dma_device_type": 1 00:27:58.874 }, 00:27:58.874 { 00:27:58.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.874 "dma_device_type": 2 00:27:58.874 }, 00:27:58.874 { 00:27:58.874 "dma_device_id": "system", 00:27:58.874 "dma_device_type": 1 00:27:58.874 }, 00:27:58.874 { 00:27:58.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.874 "dma_device_type": 2 00:27:58.874 } 00:27:58.874 ], 00:27:58.874 "driver_specific": { 00:27:58.874 "raid": { 00:27:58.874 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:27:58.874 "strip_size_kb": 0, 00:27:58.874 "state": "online", 00:27:58.874 "raid_level": "raid1", 00:27:58.874 "superblock": true, 00:27:58.874 "num_base_bdevs": 2, 00:27:58.874 "num_base_bdevs_discovered": 2, 00:27:58.874 "num_base_bdevs_operational": 2, 00:27:58.874 "base_bdevs_list": [ 00:27:58.874 { 00:27:58.874 "name": "pt1", 00:27:58.874 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.874 "is_configured": true, 00:27:58.874 "data_offset": 256, 00:27:58.874 "data_size": 7936 00:27:58.874 }, 00:27:58.874 { 00:27:58.874 "name": "pt2", 00:27:58.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.874 "is_configured": true, 00:27:58.874 "data_offset": 256, 00:27:58.874 "data_size": 7936 00:27:58.874 } 00:27:58.874 ] 00:27:58.874 } 00:27:58.874 } 00:27:58.874 }' 00:27:58.874 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:58.874 18:42:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:58.874 pt2' 00:27:58.874 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:58.874 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:58.874 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:59.132 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:59.132 "name": "pt1", 00:27:59.132 "aliases": [ 00:27:59.132 "00000000-0000-0000-0000-000000000001" 00:27:59.132 ], 00:27:59.132 "product_name": "passthru", 00:27:59.132 "block_size": 4096, 00:27:59.132 "num_blocks": 8192, 00:27:59.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:59.132 "md_size": 32, 00:27:59.132 "md_interleave": false, 00:27:59.132 "dif_type": 0, 00:27:59.132 "assigned_rate_limits": { 00:27:59.132 "rw_ios_per_sec": 0, 00:27:59.132 "rw_mbytes_per_sec": 0, 00:27:59.132 "r_mbytes_per_sec": 0, 00:27:59.132 "w_mbytes_per_sec": 0 00:27:59.132 }, 00:27:59.132 "claimed": true, 00:27:59.132 "claim_type": "exclusive_write", 00:27:59.132 "zoned": false, 00:27:59.132 "supported_io_types": { 00:27:59.132 "read": true, 00:27:59.132 "write": true, 00:27:59.132 "unmap": true, 00:27:59.132 "flush": true, 00:27:59.132 "reset": true, 00:27:59.132 "nvme_admin": false, 00:27:59.132 "nvme_io": false, 00:27:59.132 "nvme_io_md": false, 00:27:59.132 "write_zeroes": true, 00:27:59.132 "zcopy": true, 00:27:59.132 "get_zone_info": false, 00:27:59.132 "zone_management": false, 00:27:59.132 "zone_append": false, 00:27:59.132 "compare": false, 00:27:59.132 "compare_and_write": false, 00:27:59.132 "abort": true, 00:27:59.132 "seek_hole": false, 00:27:59.132 "seek_data": false, 00:27:59.132 "copy": true, 00:27:59.132 
"nvme_iov_md": false 00:27:59.132 }, 00:27:59.132 "memory_domains": [ 00:27:59.132 { 00:27:59.132 "dma_device_id": "system", 00:27:59.132 "dma_device_type": 1 00:27:59.132 }, 00:27:59.132 { 00:27:59.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.132 "dma_device_type": 2 00:27:59.132 } 00:27:59.132 ], 00:27:59.132 "driver_specific": { 00:27:59.132 "passthru": { 00:27:59.132 "name": "pt1", 00:27:59.132 "base_bdev_name": "malloc1" 00:27:59.132 } 00:27:59.132 } 00:27:59.132 }' 00:27:59.132 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:59.390 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.647 18:42:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.647 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:59.647 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:59.647 18:42:45 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:59.647 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:59.905 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:59.905 "name": "pt2", 00:27:59.905 "aliases": [ 00:27:59.905 "00000000-0000-0000-0000-000000000002" 00:27:59.905 ], 00:27:59.905 "product_name": "passthru", 00:27:59.905 "block_size": 4096, 00:27:59.905 "num_blocks": 8192, 00:27:59.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:59.905 "md_size": 32, 00:27:59.905 "md_interleave": false, 00:27:59.905 "dif_type": 0, 00:27:59.905 "assigned_rate_limits": { 00:27:59.905 "rw_ios_per_sec": 0, 00:27:59.905 "rw_mbytes_per_sec": 0, 00:27:59.905 "r_mbytes_per_sec": 0, 00:27:59.905 "w_mbytes_per_sec": 0 00:27:59.905 }, 00:27:59.905 "claimed": true, 00:27:59.905 "claim_type": "exclusive_write", 00:27:59.905 "zoned": false, 00:27:59.905 "supported_io_types": { 00:27:59.905 "read": true, 00:27:59.905 "write": true, 00:27:59.906 "unmap": true, 00:27:59.906 "flush": true, 00:27:59.906 "reset": true, 00:27:59.906 "nvme_admin": false, 00:27:59.906 "nvme_io": false, 00:27:59.906 "nvme_io_md": false, 00:27:59.906 "write_zeroes": true, 00:27:59.906 "zcopy": true, 00:27:59.906 "get_zone_info": false, 00:27:59.906 "zone_management": false, 00:27:59.906 "zone_append": false, 00:27:59.906 "compare": false, 00:27:59.906 "compare_and_write": false, 00:27:59.906 "abort": true, 00:27:59.906 "seek_hole": false, 00:27:59.906 "seek_data": false, 00:27:59.906 "copy": true, 00:27:59.906 "nvme_iov_md": false 00:27:59.906 }, 00:27:59.906 "memory_domains": [ 00:27:59.906 { 00:27:59.906 "dma_device_id": "system", 00:27:59.906 "dma_device_type": 1 00:27:59.906 }, 00:27:59.906 { 00:27:59.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.906 "dma_device_type": 2 00:27:59.906 } 
00:27:59.906 ], 00:27:59.906 "driver_specific": { 00:27:59.906 "passthru": { 00:27:59.906 "name": "pt2", 00:27:59.906 "base_bdev_name": "malloc2" 00:27:59.906 } 00:27:59.906 } 00:27:59.906 }' 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:59.906 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:28:00.163 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:00.421 [2024-07-15 18:42:45.875023] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:00.421 18:42:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' f86e0a53-0245-4ab3-b0c3-f8c977274164 '!=' f86e0a53-0245-4ab3-b0c3-f8c977274164 ']' 00:28:00.421 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:28:00.421 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:00.421 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:28:00.421 18:42:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:00.679 [2024-07-15 18:42:46.131461] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.679 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.938 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.938 "name": "raid_bdev1", 00:28:00.938 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:28:00.938 "strip_size_kb": 0, 00:28:00.938 "state": "online", 00:28:00.938 "raid_level": "raid1", 00:28:00.938 "superblock": true, 00:28:00.938 "num_base_bdevs": 2, 00:28:00.938 "num_base_bdevs_discovered": 1, 00:28:00.938 "num_base_bdevs_operational": 1, 00:28:00.938 "base_bdevs_list": [ 00:28:00.938 { 00:28:00.938 "name": null, 00:28:00.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.938 "is_configured": false, 00:28:00.938 "data_offset": 256, 00:28:00.938 "data_size": 7936 00:28:00.938 }, 00:28:00.938 { 00:28:00.938 "name": "pt2", 00:28:00.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:00.938 "is_configured": true, 00:28:00.938 "data_offset": 256, 00:28:00.938 "data_size": 7936 00:28:00.938 } 00:28:00.938 ] 00:28:00.938 }' 00:28:00.938 18:42:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.938 18:42:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.505 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:01.763 [2024-07-15 18:42:47.274511] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:01.763 [2024-07-15 18:42:47.274537] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:28:01.763 [2024-07-15 18:42:47.274585] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:01.763 [2024-07-15 18:42:47.274626] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:01.763 [2024-07-15 18:42:47.274635] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf36280 name raid_bdev1, state offline 00:28:01.763 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.763 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:28:02.021 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:28:02.021 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:28:02.021 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:28:02.021 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:02.021 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:02.278 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:02.278 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:02.278 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:28:02.278 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:02.278 18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:28:02.278 
18:42:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:02.536 [2024-07-15 18:42:48.056565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:02.536 [2024-07-15 18:42:48.056601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.536 [2024-07-15 18:42:48.056614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf35760 00:28:02.536 [2024-07-15 18:42:48.056623] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.536 [2024-07-15 18:42:48.058122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.536 [2024-07-15 18:42:48.058145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:02.536 [2024-07-15 18:42:48.058185] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:02.536 [2024-07-15 18:42:48.058208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:02.536 [2024-07-15 18:42:48.058281] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10bb8f0 00:28:02.536 [2024-07-15 18:42:48.058290] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:02.536 [2024-07-15 18:42:48.058344] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10baa60 00:28:02.536 [2024-07-15 18:42:48.058448] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10bb8f0 00:28:02.536 [2024-07-15 18:42:48.058457] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10bb8f0 00:28:02.536 [2024-07-15 18:42:48.058522] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.536 pt2 00:28:02.536 
18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.536 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.101 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.101 "name": "raid_bdev1", 00:28:03.101 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:28:03.101 "strip_size_kb": 0, 00:28:03.101 "state": "online", 00:28:03.101 "raid_level": "raid1", 00:28:03.101 "superblock": true, 00:28:03.101 "num_base_bdevs": 2, 00:28:03.101 "num_base_bdevs_discovered": 1, 00:28:03.101 
"num_base_bdevs_operational": 1, 00:28:03.101 "base_bdevs_list": [ 00:28:03.101 { 00:28:03.101 "name": null, 00:28:03.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.101 "is_configured": false, 00:28:03.101 "data_offset": 256, 00:28:03.101 "data_size": 7936 00:28:03.101 }, 00:28:03.101 { 00:28:03.101 "name": "pt2", 00:28:03.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:03.101 "is_configured": true, 00:28:03.101 "data_offset": 256, 00:28:03.101 "data_size": 7936 00:28:03.101 } 00:28:03.101 ] 00:28:03.101 }' 00:28:03.101 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.101 18:42:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.666 18:42:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:03.666 [2024-07-15 18:42:49.207668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:03.666 [2024-07-15 18:42:49.207694] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:03.666 [2024-07-15 18:42:49.207742] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:03.666 [2024-07-15 18:42:49.207783] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:03.666 [2024-07-15 18:42:49.207792] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10bb8f0 name raid_bdev1, state offline 00:28:03.924 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.924 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:28:04.182 18:42:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:28:04.182 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:28:04.182 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:28:04.182 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:04.182 [2024-07-15 18:42:49.725042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:04.182 [2024-07-15 18:42:49.725088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.182 [2024-07-15 18:42:49.725103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf37ec0 00:28:04.182 [2024-07-15 18:42:49.725113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.182 [2024-07-15 18:42:49.726607] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.182 [2024-07-15 18:42:49.726630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:04.182 [2024-07-15 18:42:49.726670] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:04.182 [2024-07-15 18:42:49.726691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:04.182 [2024-07-15 18:42:49.726781] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:04.182 [2024-07-15 18:42:49.726792] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:04.182 [2024-07-15 18:42:49.726804] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10bc630 name raid_bdev1, state configuring 00:28:04.182 
[2024-07-15 18:42:49.726824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:04.182 [2024-07-15 18:42:49.726877] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10bc630 00:28:04.182 [2024-07-15 18:42:49.726886] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:04.182 [2024-07-15 18:42:49.726938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bd4f0 00:28:04.182 [2024-07-15 18:42:49.727048] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10bc630 00:28:04.182 [2024-07-15 18:42:49.727057] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10bc630 00:28:04.182 [2024-07-15 18:42:49.727129] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.182 pt1 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.440 
18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.440 18:42:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.698 18:42:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.698 "name": "raid_bdev1", 00:28:04.698 "uuid": "f86e0a53-0245-4ab3-b0c3-f8c977274164", 00:28:04.698 "strip_size_kb": 0, 00:28:04.698 "state": "online", 00:28:04.698 "raid_level": "raid1", 00:28:04.698 "superblock": true, 00:28:04.698 "num_base_bdevs": 2, 00:28:04.698 "num_base_bdevs_discovered": 1, 00:28:04.698 "num_base_bdevs_operational": 1, 00:28:04.698 "base_bdevs_list": [ 00:28:04.698 { 00:28:04.698 "name": null, 00:28:04.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.698 "is_configured": false, 00:28:04.698 "data_offset": 256, 00:28:04.698 "data_size": 7936 00:28:04.698 }, 00:28:04.698 { 00:28:04.698 "name": "pt2", 00:28:04.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:04.698 "is_configured": true, 00:28:04.698 "data_offset": 256, 00:28:04.698 "data_size": 7936 00:28:04.698 } 00:28:04.698 ] 00:28:04.698 }' 00:28:04.698 18:42:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:04.698 18:42:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.630 18:42:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:05.630 18:42:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:05.630 18:42:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:28:05.630 18:42:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:05.630 18:42:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:28:05.889 [2024-07-15 18:42:51.373836] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' f86e0a53-0245-4ab3-b0c3-f8c977274164 '!=' f86e0a53-0245-4ab3-b0c3-f8c977274164 ']' 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2940888 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2940888 ']' 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 2940888 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2940888 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2940888' 
00:28:05.889 killing process with pid 2940888 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 2940888 00:28:05.889 [2024-07-15 18:42:51.437412] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:05.889 [2024-07-15 18:42:51.437468] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.889 [2024-07-15 18:42:51.437510] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.889 [2024-07-15 18:42:51.437519] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10bc630 name raid_bdev1, state offline 00:28:05.889 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 2940888 00:28:06.147 [2024-07-15 18:42:51.459294] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:06.148 18:42:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:28:06.148 00:28:06.148 real 0m16.617s 00:28:06.148 user 0m30.948s 00:28:06.148 sys 0m2.329s 00:28:06.148 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.148 18:42:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:06.148 ************************************ 00:28:06.148 END TEST raid_superblock_test_md_separate 00:28:06.148 ************************************ 00:28:06.148 18:42:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:06.148 18:42:51 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:28:06.148 18:42:51 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:28:06.148 18:42:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:06.148 18:42:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.148 18:42:51 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:28:06.148 ************************************ 00:28:06.148 START TEST raid_rebuild_test_sb_md_separate 00:28:06.148 ************************************ 00:28:06.148 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:28:06.148 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:06.148 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:06.148 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:06.148 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:06.148 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:06.435 18:42:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2943757 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2943757 /var/tmp/spdk-raid.sock 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:06.435 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2943757 ']' 00:28:06.436 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:06.436 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.436 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:06.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:06.436 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.436 18:42:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:06.436 [2024-07-15 18:42:51.760727] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:28:06.436 [2024-07-15 18:42:51.760786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943757 ] 00:28:06.436 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:06.436 Zero copy mechanism will not be used. 
00:28:06.436 [2024-07-15 18:42:51.858047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.436 [2024-07-15 18:42:51.955750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.695 [2024-07-15 18:42:52.014712] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:06.695 [2024-07-15 18:42:52.014739] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.630 18:42:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.630 18:42:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:28:07.630 18:42:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:07.630 18:42:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:28:07.888 BaseBdev1_malloc 00:28:07.888 18:42:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:08.145 [2024-07-15 18:42:53.685075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:08.145 [2024-07-15 18:42:53.685122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.145 [2024-07-15 18:42:53.685142] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d30ac0 00:28:08.145 [2024-07-15 18:42:53.685151] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.145 [2024-07-15 18:42:53.686679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.145 [2024-07-15 18:42:53.686705] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:28:08.145 BaseBdev1 00:28:08.404 18:42:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.404 18:42:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:28:08.662 BaseBdev2_malloc 00:28:08.662 18:42:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:08.920 [2024-07-15 18:42:54.215911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:08.920 [2024-07-15 18:42:54.215962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.920 [2024-07-15 18:42:54.215981] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec28f0 00:28:08.920 [2024-07-15 18:42:54.215991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.920 [2024-07-15 18:42:54.217564] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.920 [2024-07-15 18:42:54.217590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:08.920 BaseBdev2 00:28:08.920 18:42:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:28:09.179 spare_malloc 00:28:09.437 18:42:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:09.695 
spare_delay 00:28:09.695 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:09.955 [2024-07-15 18:42:55.460472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:09.955 [2024-07-15 18:42:55.460516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.955 [2024-07-15 18:42:55.460536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d28940 00:28:09.955 [2024-07-15 18:42:55.460545] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.955 [2024-07-15 18:42:55.462035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.955 [2024-07-15 18:42:55.462060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:09.955 spare 00:28:09.955 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:10.522 [2024-07-15 18:42:55.941779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:10.522 [2024-07-15 18:42:55.943157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.522 [2024-07-15 18:42:55.943320] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d2b250 00:28:10.522 [2024-07-15 18:42:55.943332] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:10.522 [2024-07-15 18:42:55.943401] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2a770 00:28:10.522 [2024-07-15 18:42:55.943516] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d2b250 00:28:10.522 
[2024-07-15 18:42:55.943525] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d2b250 00:28:10.522 [2024-07-15 18:42:55.943596] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.522 18:42:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.780 18:42:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.780 "name": "raid_bdev1", 00:28:10.780 "uuid": 
"7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:10.780 "strip_size_kb": 0, 00:28:10.780 "state": "online", 00:28:10.780 "raid_level": "raid1", 00:28:10.780 "superblock": true, 00:28:10.780 "num_base_bdevs": 2, 00:28:10.780 "num_base_bdevs_discovered": 2, 00:28:10.780 "num_base_bdevs_operational": 2, 00:28:10.780 "base_bdevs_list": [ 00:28:10.780 { 00:28:10.780 "name": "BaseBdev1", 00:28:10.780 "uuid": "7f0dccc9-1fd9-5ea0-b3f5-f262a82f34d6", 00:28:10.780 "is_configured": true, 00:28:10.780 "data_offset": 256, 00:28:10.780 "data_size": 7936 00:28:10.780 }, 00:28:10.780 { 00:28:10.780 "name": "BaseBdev2", 00:28:10.780 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:10.780 "is_configured": true, 00:28:10.780 "data_offset": 256, 00:28:10.780 "data_size": 7936 00:28:10.780 } 00:28:10.780 ] 00:28:10.780 }' 00:28:10.780 18:42:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.780 18:42:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:11.345 18:42:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:11.345 18:42:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:11.603 [2024-07-15 18:42:57.021112] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.603 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:28:11.603 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.603 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:11.861 18:42:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:11.861 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:12.120 [2024-07-15 18:42:57.542278] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2a770 00:28:12.120 /dev/nbd0 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:12.120 
18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:12.120 1+0 records in 00:28:12.120 1+0 records out 00:28:12.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224889 s, 18.2 MB/s 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:12.120 
18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:12.120 18:42:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:28:13.054 7936+0 records in 00:28:13.054 7936+0 records out 00:28:13.054 32505856 bytes (33 MB, 31 MiB) copied, 0.731643 s, 44.4 MB/s 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.054 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:13.311 [2024-07-15 18:42:58.614529] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.311 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:13.311 [2024-07-15 18:42:58.862412] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.568 18:42:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.825 18:42:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.825 "name": "raid_bdev1", 00:28:13.825 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:13.825 "strip_size_kb": 0, 00:28:13.825 "state": "online", 00:28:13.825 "raid_level": "raid1", 00:28:13.825 "superblock": true, 00:28:13.825 "num_base_bdevs": 2, 00:28:13.825 "num_base_bdevs_discovered": 1, 00:28:13.825 "num_base_bdevs_operational": 1, 00:28:13.825 "base_bdevs_list": [ 00:28:13.825 { 00:28:13.825 "name": null, 00:28:13.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.825 "is_configured": false, 00:28:13.825 "data_offset": 256, 00:28:13.825 "data_size": 7936 00:28:13.825 }, 00:28:13.825 { 00:28:13.825 "name": "BaseBdev2", 00:28:13.825 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:13.825 "is_configured": true, 00:28:13.825 "data_offset": 256, 00:28:13.825 "data_size": 7936 00:28:13.825 } 00:28:13.825 ] 00:28:13.825 }' 00:28:13.825 18:42:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.825 18:42:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:14.389 18:42:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:14.692 [2024-07-15 18:43:00.001486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:14.692 [2024-07-15 18:43:00.003752] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d26cd0 00:28:14.692 [2024-07-15 18:43:00.005818] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:14.692 18:43:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.626 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.884 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.884 "name": "raid_bdev1", 00:28:15.884 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:15.884 "strip_size_kb": 0, 00:28:15.884 "state": "online", 00:28:15.884 "raid_level": "raid1", 00:28:15.884 "superblock": true, 00:28:15.884 "num_base_bdevs": 2, 00:28:15.884 
"num_base_bdevs_discovered": 2, 00:28:15.884 "num_base_bdevs_operational": 2, 00:28:15.884 "process": { 00:28:15.884 "type": "rebuild", 00:28:15.884 "target": "spare", 00:28:15.884 "progress": { 00:28:15.885 "blocks": 3072, 00:28:15.885 "percent": 38 00:28:15.885 } 00:28:15.885 }, 00:28:15.885 "base_bdevs_list": [ 00:28:15.885 { 00:28:15.885 "name": "spare", 00:28:15.885 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:15.885 "is_configured": true, 00:28:15.885 "data_offset": 256, 00:28:15.885 "data_size": 7936 00:28:15.885 }, 00:28:15.885 { 00:28:15.885 "name": "BaseBdev2", 00:28:15.885 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:15.885 "is_configured": true, 00:28:15.885 "data_offset": 256, 00:28:15.885 "data_size": 7936 00:28:15.885 } 00:28:15.885 ] 00:28:15.885 }' 00:28:15.885 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:15.885 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.885 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:15.885 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.885 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:16.142 [2024-07-15 18:43:01.619421] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:16.400 [2024-07-15 18:43:01.718836] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:16.400 [2024-07-15 18:43:01.718879] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.400 [2024-07-15 18:43:01.718894] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:28:16.400 [2024-07-15 18:43:01.718900] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.400 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.401 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.401 18:43:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.659 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.659 "name": "raid_bdev1", 00:28:16.659 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:16.659 "strip_size_kb": 0, 00:28:16.659 "state": "online", 
00:28:16.659 "raid_level": "raid1", 00:28:16.659 "superblock": true, 00:28:16.659 "num_base_bdevs": 2, 00:28:16.659 "num_base_bdevs_discovered": 1, 00:28:16.659 "num_base_bdevs_operational": 1, 00:28:16.659 "base_bdevs_list": [ 00:28:16.659 { 00:28:16.659 "name": null, 00:28:16.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.659 "is_configured": false, 00:28:16.659 "data_offset": 256, 00:28:16.659 "data_size": 7936 00:28:16.659 }, 00:28:16.659 { 00:28:16.659 "name": "BaseBdev2", 00:28:16.659 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:16.659 "is_configured": true, 00:28:16.659 "data_offset": 256, 00:28:16.659 "data_size": 7936 00:28:16.659 } 00:28:16.659 ] 00:28:16.659 }' 00:28:16.659 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.659 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.225 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.484 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:28:17.484 "name": "raid_bdev1", 00:28:17.484 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:17.484 "strip_size_kb": 0, 00:28:17.484 "state": "online", 00:28:17.484 "raid_level": "raid1", 00:28:17.484 "superblock": true, 00:28:17.484 "num_base_bdevs": 2, 00:28:17.484 "num_base_bdevs_discovered": 1, 00:28:17.484 "num_base_bdevs_operational": 1, 00:28:17.484 "base_bdevs_list": [ 00:28:17.484 { 00:28:17.484 "name": null, 00:28:17.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.484 "is_configured": false, 00:28:17.484 "data_offset": 256, 00:28:17.484 "data_size": 7936 00:28:17.484 }, 00:28:17.484 { 00:28:17.484 "name": "BaseBdev2", 00:28:17.484 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:17.484 "is_configured": true, 00:28:17.484 "data_offset": 256, 00:28:17.484 "data_size": 7936 00:28:17.484 } 00:28:17.484 ] 00:28:17.484 }' 00:28:17.484 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.484 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:17.484 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.484 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:17.484 18:43:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:17.742 [2024-07-15 18:43:03.201928] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:17.742 [2024-07-15 18:43:03.204142] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2ddb0 00:28:17.742 [2024-07-15 18:43:03.205647] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:17.742 18:43:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:18.677 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:18.677 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:18.677 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:18.677 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:18.677 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:18.935 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.935 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:19.194 "name": "raid_bdev1", 00:28:19.194 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:19.194 "strip_size_kb": 0, 00:28:19.194 "state": "online", 00:28:19.194 "raid_level": "raid1", 00:28:19.194 "superblock": true, 00:28:19.194 "num_base_bdevs": 2, 00:28:19.194 "num_base_bdevs_discovered": 2, 00:28:19.194 "num_base_bdevs_operational": 2, 00:28:19.194 "process": { 00:28:19.194 "type": "rebuild", 00:28:19.194 "target": "spare", 00:28:19.194 "progress": { 00:28:19.194 "blocks": 3072, 00:28:19.194 "percent": 38 00:28:19.194 } 00:28:19.194 }, 00:28:19.194 "base_bdevs_list": [ 00:28:19.194 { 00:28:19.194 "name": "spare", 00:28:19.194 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:19.194 "is_configured": true, 00:28:19.194 "data_offset": 256, 00:28:19.194 "data_size": 7936 00:28:19.194 }, 00:28:19.194 { 
00:28:19.194 "name": "BaseBdev2", 00:28:19.194 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:19.194 "is_configured": true, 00:28:19.194 "data_offset": 256, 00:28:19.194 "data_size": 7936 00:28:19.194 } 00:28:19.194 ] 00:28:19.194 }' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:19.194 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1143 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:19.194 18:43:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.194 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.453 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:19.453 "name": "raid_bdev1", 00:28:19.453 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:19.453 "strip_size_kb": 0, 00:28:19.453 "state": "online", 00:28:19.453 "raid_level": "raid1", 00:28:19.453 "superblock": true, 00:28:19.453 "num_base_bdevs": 2, 00:28:19.453 "num_base_bdevs_discovered": 2, 00:28:19.453 "num_base_bdevs_operational": 2, 00:28:19.453 "process": { 00:28:19.453 "type": "rebuild", 00:28:19.453 "target": "spare", 00:28:19.453 "progress": { 00:28:19.453 "blocks": 4096, 00:28:19.453 "percent": 51 00:28:19.453 } 00:28:19.453 }, 00:28:19.453 "base_bdevs_list": [ 00:28:19.453 { 00:28:19.453 "name": "spare", 00:28:19.453 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:19.453 "is_configured": true, 00:28:19.453 "data_offset": 256, 00:28:19.453 "data_size": 7936 00:28:19.453 }, 00:28:19.453 { 00:28:19.453 "name": "BaseBdev2", 00:28:19.453 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:19.454 "is_configured": true, 00:28:19.454 "data_offset": 256, 00:28:19.454 "data_size": 7936 00:28:19.454 } 00:28:19.454 ] 00:28:19.454 }' 00:28:19.454 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
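The `line 665: [: =: unary operator expected` failure logged above is the classic unquoted-empty-variable pattern in a `[` test: the script evaluates `'[' = false ']'`, meaning the variable being compared expanded to nothing, leaving `[` with too few operands. A minimal sketch of the failure mode and the usual quoting fix (the variable name below is illustrative, not taken from bdev_raid.sh):

```shell
#!/usr/bin/env bash
# Empty value, mimicking the unset/empty expansion seen in the log.
rpc_result=""

# Unquoted: word-splits to `[ = false ]`, so `[` sees "=" where it expects
# a unary operator and fails with status 2 ("unary operator expected").
[ $rpc_result = false ] 2>/dev/null
echo "unquoted exit status: $?"   # prints: unquoted exit status: 2

# Quoted: expands to `[ "" = false ]`, a well-formed (false) comparison.
if [ "$rpc_result" = false ]; then
  echo "matched false"
else
  echo "quoted test evaluated safely"
fi
```

As the log shows, the test run continues past the error because the broken `[` simply returns nonzero; quoting the expansion (or using `[[ ... ]]`, which does not word-split) avoids the spurious diagnostic.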
00:28:19.454 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:19.454 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.454 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.454 18:43:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.830 18:43:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.830 18:43:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.830 "name": "raid_bdev1", 00:28:20.830 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:20.830 "strip_size_kb": 0, 00:28:20.830 "state": "online", 00:28:20.830 "raid_level": "raid1", 00:28:20.830 "superblock": true, 00:28:20.830 "num_base_bdevs": 2, 00:28:20.830 "num_base_bdevs_discovered": 2, 
00:28:20.830 "num_base_bdevs_operational": 2, 00:28:20.830 "process": { 00:28:20.830 "type": "rebuild", 00:28:20.830 "target": "spare", 00:28:20.830 "progress": { 00:28:20.830 "blocks": 7424, 00:28:20.830 "percent": 93 00:28:20.830 } 00:28:20.830 }, 00:28:20.830 "base_bdevs_list": [ 00:28:20.830 { 00:28:20.830 "name": "spare", 00:28:20.830 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:20.830 "is_configured": true, 00:28:20.830 "data_offset": 256, 00:28:20.830 "data_size": 7936 00:28:20.830 }, 00:28:20.830 { 00:28:20.830 "name": "BaseBdev2", 00:28:20.830 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:20.830 "is_configured": true, 00:28:20.830 "data_offset": 256, 00:28:20.830 "data_size": 7936 00:28:20.830 } 00:28:20.830 ] 00:28:20.830 }' 00:28:20.830 18:43:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:20.830 18:43:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.830 18:43:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:20.830 18:43:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.830 18:43:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:20.830 [2024-07-15 18:43:06.329116] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:20.830 [2024-07-15 18:43:06.329172] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:20.830 [2024-07-15 18:43:06.329252] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.833 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.092 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:22.092 "name": "raid_bdev1", 00:28:22.092 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:22.092 "strip_size_kb": 0, 00:28:22.092 "state": "online", 00:28:22.092 "raid_level": "raid1", 00:28:22.092 "superblock": true, 00:28:22.092 "num_base_bdevs": 2, 00:28:22.092 "num_base_bdevs_discovered": 2, 00:28:22.092 "num_base_bdevs_operational": 2, 00:28:22.092 "base_bdevs_list": [ 00:28:22.092 { 00:28:22.092 "name": "spare", 00:28:22.092 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:22.092 "is_configured": true, 00:28:22.092 "data_offset": 256, 00:28:22.092 "data_size": 7936 00:28:22.092 }, 00:28:22.092 { 00:28:22.092 "name": "BaseBdev2", 00:28:22.092 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:22.092 "is_configured": true, 00:28:22.092 "data_offset": 256, 00:28:22.092 "data_size": 7936 00:28:22.092 } 00:28:22.092 ] 00:28:22.092 }' 00:28:22.092 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:22.092 18:43:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:22.092 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:22.350 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:22.350 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:28:22.350 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:22.350 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:22.350 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:22.351 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:22.351 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:22.351 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.351 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.610 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:22.610 "name": "raid_bdev1", 00:28:22.610 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:22.610 "strip_size_kb": 0, 00:28:22.610 "state": "online", 00:28:22.610 "raid_level": "raid1", 00:28:22.610 "superblock": true, 00:28:22.610 "num_base_bdevs": 2, 00:28:22.610 "num_base_bdevs_discovered": 2, 00:28:22.610 "num_base_bdevs_operational": 2, 00:28:22.610 "base_bdevs_list": [ 00:28:22.610 { 00:28:22.610 "name": "spare", 00:28:22.610 "uuid": 
"9d870fe4-0654-520d-8807-b783b717114f", 00:28:22.610 "is_configured": true, 00:28:22.610 "data_offset": 256, 00:28:22.610 "data_size": 7936 00:28:22.610 }, 00:28:22.610 { 00:28:22.610 "name": "BaseBdev2", 00:28:22.610 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:22.610 "is_configured": true, 00:28:22.610 "data_offset": 256, 00:28:22.610 "data_size": 7936 00:28:22.610 } 00:28:22.610 ] 00:28:22.610 }' 00:28:22.610 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:22.610 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:22.610 18:43:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.610 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.868 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.868 "name": "raid_bdev1", 00:28:22.868 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:22.868 "strip_size_kb": 0, 00:28:22.868 "state": "online", 00:28:22.868 "raid_level": "raid1", 00:28:22.868 "superblock": true, 00:28:22.868 "num_base_bdevs": 2, 00:28:22.868 "num_base_bdevs_discovered": 2, 00:28:22.868 "num_base_bdevs_operational": 2, 00:28:22.868 "base_bdevs_list": [ 00:28:22.868 { 00:28:22.868 "name": "spare", 00:28:22.868 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:22.868 "is_configured": true, 00:28:22.868 "data_offset": 256, 00:28:22.868 "data_size": 7936 00:28:22.868 }, 00:28:22.868 { 00:28:22.868 "name": "BaseBdev2", 00:28:22.868 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:22.868 "is_configured": true, 00:28:22.868 "data_offset": 256, 00:28:22.868 "data_size": 7936 00:28:22.868 } 00:28:22.868 ] 00:28:22.868 }' 00:28:22.868 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.868 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:23.802 18:43:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:23.802 [2024-07-15 18:43:09.229075] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:28:23.802 [2024-07-15 18:43:09.229100] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:23.802 [2024-07-15 18:43:09.229153] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.802 [2024-07-15 18:43:09.229204] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:23.802 [2024-07-15 18:43:09.229213] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d2b250 name raid_bdev1, state offline 00:28:23.802 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:28:23.802 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.060 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:24.319 /dev/nbd0 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:24.319 1+0 records in 00:28:24.319 1+0 
records out 00:28:24.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241988 s, 16.9 MB/s 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.319 18:43:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:24.578 /dev/nbd1 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:24.578 1+0 records in 00:28:24.578 1+0 records out 00:28:24.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259233 s, 15.8 MB/s 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.578 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:24.836 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:24.836 18:43:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:24.836 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.836 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:24.836 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:28:24.836 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.836 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:25.093 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd1 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:25.349 18:43:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:25.606 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:25.864 [2024-07-15 18:43:11.266017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:25.864 [2024-07-15 18:43:11.266060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.864 [2024-07-15 18:43:11.266077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d27850 00:28:25.864 [2024-07-15 18:43:11.266087] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.864 [2024-07-15 18:43:11.267626] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.864 [2024-07-15 18:43:11.267650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:25.864 [2024-07-15 18:43:11.267711] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:25.864 [2024-07-15 18:43:11.267736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:25.864 [2024-07-15 18:43:11.267840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:25.864 spare 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.864 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.864 [2024-07-15 18:43:11.368155] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d2ce20 00:28:25.864 [2024-07-15 18:43:11.368172] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:25.864 [2024-07-15 18:43:11.368248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d29e20 00:28:25.864 [2024-07-15 18:43:11.368372] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d2ce20 00:28:25.864 [2024-07-15 18:43:11.368381] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d2ce20 00:28:25.864 [2024-07-15 18:43:11.368459] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.123 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.123 "name": "raid_bdev1", 00:28:26.123 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:26.123 "strip_size_kb": 0, 00:28:26.123 "state": "online", 00:28:26.123 "raid_level": "raid1", 00:28:26.123 "superblock": true, 00:28:26.123 "num_base_bdevs": 2, 00:28:26.123 "num_base_bdevs_discovered": 2, 00:28:26.123 "num_base_bdevs_operational": 2, 00:28:26.123 "base_bdevs_list": [ 00:28:26.123 { 00:28:26.123 "name": "spare", 00:28:26.123 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:26.123 "is_configured": true, 00:28:26.123 "data_offset": 256, 00:28:26.123 "data_size": 7936 00:28:26.123 }, 00:28:26.123 { 00:28:26.123 "name": "BaseBdev2", 00:28:26.123 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:26.123 "is_configured": true, 00:28:26.123 "data_offset": 256, 00:28:26.123 "data_size": 7936 00:28:26.123 } 00:28:26.123 ] 00:28:26.123 }' 00:28:26.123 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.123 18:43:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.689 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:26.947 "name": "raid_bdev1", 00:28:26.947 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:26.947 "strip_size_kb": 0, 00:28:26.947 "state": "online", 00:28:26.947 "raid_level": "raid1", 00:28:26.947 "superblock": true, 00:28:26.947 "num_base_bdevs": 2, 00:28:26.947 "num_base_bdevs_discovered": 2, 00:28:26.947 "num_base_bdevs_operational": 2, 00:28:26.947 "base_bdevs_list": [ 00:28:26.947 { 00:28:26.947 "name": "spare", 00:28:26.947 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:26.947 "is_configured": true, 00:28:26.947 "data_offset": 256, 00:28:26.947 "data_size": 7936 00:28:26.947 }, 00:28:26.947 { 00:28:26.947 "name": "BaseBdev2", 00:28:26.947 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:26.947 "is_configured": true, 00:28:26.947 
"data_offset": 256, 00:28:26.947 "data_size": 7936 00:28:26.947 } 00:28:26.947 ] 00:28:26.947 }' 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.947 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:27.206 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.206 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:27.465 [2024-07-15 18:43:12.978763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:27.465 18:43:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:27.465 18:43:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.465 18:43:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.724 18:43:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:27.724 "name": "raid_bdev1", 00:28:27.724 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:27.724 "strip_size_kb": 0, 00:28:27.724 "state": "online", 00:28:27.724 "raid_level": "raid1", 00:28:27.724 "superblock": true, 00:28:27.724 "num_base_bdevs": 2, 00:28:27.724 "num_base_bdevs_discovered": 1, 00:28:27.724 "num_base_bdevs_operational": 1, 00:28:27.724 "base_bdevs_list": [ 00:28:27.724 { 00:28:27.724 "name": null, 00:28:27.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.724 "is_configured": false, 00:28:27.724 "data_offset": 256, 00:28:27.724 "data_size": 7936 00:28:27.724 }, 00:28:27.724 { 00:28:27.724 "name": "BaseBdev2", 00:28:27.724 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:27.724 "is_configured": true, 00:28:27.724 "data_offset": 256, 00:28:27.724 "data_size": 7936 00:28:27.724 } 00:28:27.724 ] 00:28:27.724 }' 00:28:27.724 18:43:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:27.724 18:43:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:28.657 18:43:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:28.657 [2024-07-15 18:43:14.182039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:28.657 [2024-07-15 18:43:14.182184] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:28.657 [2024-07-15 18:43:14.182199] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:28.657 [2024-07-15 18:43:14.182223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:28.657 [2024-07-15 18:43:14.184362] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2fe00 00:28:28.657 [2024-07-15 18:43:14.186569] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:28.657 18:43:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.035 "name": "raid_bdev1", 00:28:30.035 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:30.035 "strip_size_kb": 0, 00:28:30.035 "state": "online", 00:28:30.035 "raid_level": "raid1", 00:28:30.035 "superblock": true, 00:28:30.035 "num_base_bdevs": 2, 00:28:30.035 "num_base_bdevs_discovered": 2, 00:28:30.035 "num_base_bdevs_operational": 2, 00:28:30.035 "process": { 00:28:30.035 "type": "rebuild", 00:28:30.035 "target": "spare", 00:28:30.035 "progress": { 00:28:30.035 "blocks": 3072, 00:28:30.035 "percent": 38 00:28:30.035 } 00:28:30.035 }, 00:28:30.035 "base_bdevs_list": [ 00:28:30.035 { 00:28:30.035 "name": "spare", 00:28:30.035 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:30.035 "is_configured": true, 00:28:30.035 "data_offset": 256, 00:28:30.035 "data_size": 7936 00:28:30.035 }, 00:28:30.035 { 00:28:30.035 "name": "BaseBdev2", 00:28:30.035 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:30.035 "is_configured": true, 00:28:30.035 "data_offset": 256, 00:28:30.035 "data_size": 7936 00:28:30.035 } 00:28:30.035 ] 00:28:30.035 }' 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:30.035 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:30.294 18:43:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:30.294 [2024-07-15 18:43:15.820375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:30.553 [2024-07-15 18:43:15.899638] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:30.553 [2024-07-15 18:43:15.899682] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.553 [2024-07-15 18:43:15.899696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:30.553 [2024-07-15 18:43:15.899703] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.553 18:43:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.553 18:43:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.813 18:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.813 "name": "raid_bdev1", 00:28:30.813 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:30.813 "strip_size_kb": 0, 00:28:30.813 "state": "online", 00:28:30.813 "raid_level": "raid1", 00:28:30.813 "superblock": true, 00:28:30.813 "num_base_bdevs": 2, 00:28:30.813 "num_base_bdevs_discovered": 1, 00:28:30.813 "num_base_bdevs_operational": 1, 00:28:30.813 "base_bdevs_list": [ 00:28:30.813 { 00:28:30.813 "name": null, 00:28:30.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.813 "is_configured": false, 00:28:30.813 "data_offset": 256, 00:28:30.813 "data_size": 7936 00:28:30.813 }, 00:28:30.813 { 00:28:30.813 "name": "BaseBdev2", 00:28:30.813 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:30.813 "is_configured": true, 00:28:30.813 "data_offset": 256, 00:28:30.813 "data_size": 7936 00:28:30.813 } 00:28:30.813 ] 00:28:30.813 }' 00:28:30.813 18:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.813 18:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:31.380 18:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:31.639 [2024-07-15 18:43:17.033760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:31.639 
[2024-07-15 18:43:17.033804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.639 [2024-07-15 18:43:17.033823] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d2f830 00:28:31.639 [2024-07-15 18:43:17.033833] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:31.639 [2024-07-15 18:43:17.034052] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.639 [2024-07-15 18:43:17.034070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:31.639 [2024-07-15 18:43:17.034127] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:31.639 [2024-07-15 18:43:17.034138] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:31.639 [2024-07-15 18:43:17.034148] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:31.639 [2024-07-15 18:43:17.034163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:31.639 [2024-07-15 18:43:17.036309] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2fe00 00:28:31.639 [2024-07-15 18:43:17.037821] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:31.639 spare 00:28:31.639 18:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.575 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.834 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:32.834 "name": "raid_bdev1", 00:28:32.834 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:32.834 "strip_size_kb": 0, 00:28:32.834 "state": "online", 00:28:32.834 "raid_level": "raid1", 00:28:32.834 "superblock": true, 00:28:32.834 "num_base_bdevs": 2, 00:28:32.834 "num_base_bdevs_discovered": 2, 00:28:32.834 "num_base_bdevs_operational": 2, 00:28:32.834 "process": { 00:28:32.834 "type": 
"rebuild", 00:28:32.834 "target": "spare", 00:28:32.834 "progress": { 00:28:32.834 "blocks": 3072, 00:28:32.834 "percent": 38 00:28:32.834 } 00:28:32.834 }, 00:28:32.834 "base_bdevs_list": [ 00:28:32.834 { 00:28:32.834 "name": "spare", 00:28:32.834 "uuid": "9d870fe4-0654-520d-8807-b783b717114f", 00:28:32.834 "is_configured": true, 00:28:32.834 "data_offset": 256, 00:28:32.834 "data_size": 7936 00:28:32.834 }, 00:28:32.834 { 00:28:32.834 "name": "BaseBdev2", 00:28:32.834 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:32.834 "is_configured": true, 00:28:32.834 "data_offset": 256, 00:28:32.834 "data_size": 7936 00:28:32.834 } 00:28:32.834 ] 00:28:32.834 }' 00:28:32.834 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:32.834 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:32.834 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:33.093 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:33.093 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:33.351 [2024-07-15 18:43:18.651386] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:33.352 [2024-07-15 18:43:18.750800] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:33.352 [2024-07-15 18:43:18.750843] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:33.352 [2024-07-15 18:43:18.750857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:33.352 [2024-07-15 18:43:18.750863] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: 
No such device 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.352 18:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.611 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:33.611 "name": "raid_bdev1", 00:28:33.611 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:33.611 "strip_size_kb": 0, 00:28:33.611 "state": "online", 00:28:33.611 "raid_level": "raid1", 00:28:33.611 "superblock": true, 00:28:33.611 "num_base_bdevs": 2, 00:28:33.611 
"num_base_bdevs_discovered": 1, 00:28:33.611 "num_base_bdevs_operational": 1, 00:28:33.611 "base_bdevs_list": [ 00:28:33.611 { 00:28:33.611 "name": null, 00:28:33.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.611 "is_configured": false, 00:28:33.611 "data_offset": 256, 00:28:33.611 "data_size": 7936 00:28:33.611 }, 00:28:33.611 { 00:28:33.611 "name": "BaseBdev2", 00:28:33.611 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:33.611 "is_configured": true, 00:28:33.611 "data_offset": 256, 00:28:33.611 "data_size": 7936 00:28:33.611 } 00:28:33.611 ] 00:28:33.611 }' 00:28:33.611 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:33.611 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.546 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:34.547 "name": "raid_bdev1", 00:28:34.547 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:34.547 
"strip_size_kb": 0, 00:28:34.547 "state": "online", 00:28:34.547 "raid_level": "raid1", 00:28:34.547 "superblock": true, 00:28:34.547 "num_base_bdevs": 2, 00:28:34.547 "num_base_bdevs_discovered": 1, 00:28:34.547 "num_base_bdevs_operational": 1, 00:28:34.547 "base_bdevs_list": [ 00:28:34.547 { 00:28:34.547 "name": null, 00:28:34.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.547 "is_configured": false, 00:28:34.547 "data_offset": 256, 00:28:34.547 "data_size": 7936 00:28:34.547 }, 00:28:34.547 { 00:28:34.547 "name": "BaseBdev2", 00:28:34.547 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:34.547 "is_configured": true, 00:28:34.547 "data_offset": 256, 00:28:34.547 "data_size": 7936 00:28:34.547 } 00:28:34.547 ] 00:28:34.547 }' 00:28:34.547 18:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.547 18:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:34.547 18:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:34.547 18:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:34.547 18:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:34.805 18:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:35.063 [2024-07-15 18:43:20.506572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:35.064 [2024-07-15 18:43:20.506615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.064 [2024-07-15 18:43:20.506632] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d30cf0 00:28:35.064 [2024-07-15 18:43:20.506647] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.064 [2024-07-15 18:43:20.506825] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.064 [2024-07-15 18:43:20.506841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:35.064 [2024-07-15 18:43:20.506885] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:35.064 [2024-07-15 18:43:20.506895] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:35.064 [2024-07-15 18:43:20.506903] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:35.064 BaseBdev1 00:28:35.064 18:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.999 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.258 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:36.258 "name": "raid_bdev1", 00:28:36.258 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:36.258 "strip_size_kb": 0, 00:28:36.258 "state": "online", 00:28:36.258 "raid_level": "raid1", 00:28:36.258 "superblock": true, 00:28:36.258 "num_base_bdevs": 2, 00:28:36.258 "num_base_bdevs_discovered": 1, 00:28:36.258 "num_base_bdevs_operational": 1, 00:28:36.258 "base_bdevs_list": [ 00:28:36.258 { 00:28:36.258 "name": null, 00:28:36.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.258 "is_configured": false, 00:28:36.258 "data_offset": 256, 00:28:36.258 "data_size": 7936 00:28:36.258 }, 00:28:36.258 { 00:28:36.258 "name": "BaseBdev2", 00:28:36.258 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:36.258 "is_configured": true, 00:28:36.258 "data_offset": 256, 00:28:36.258 "data_size": 7936 00:28:36.258 } 00:28:36.258 ] 00:28:36.258 }' 00:28:36.258 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:36.258 18:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:37.256 18:43:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.256 "name": "raid_bdev1", 00:28:37.256 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:37.256 "strip_size_kb": 0, 00:28:37.256 "state": "online", 00:28:37.256 "raid_level": "raid1", 00:28:37.256 "superblock": true, 00:28:37.256 "num_base_bdevs": 2, 00:28:37.256 "num_base_bdevs_discovered": 1, 00:28:37.256 "num_base_bdevs_operational": 1, 00:28:37.256 "base_bdevs_list": [ 00:28:37.256 { 00:28:37.256 "name": null, 00:28:37.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.256 "is_configured": false, 00:28:37.256 "data_offset": 256, 00:28:37.256 "data_size": 7936 00:28:37.256 }, 00:28:37.256 { 00:28:37.256 "name": "BaseBdev2", 00:28:37.256 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:37.256 "is_configured": true, 00:28:37.256 "data_offset": 256, 00:28:37.256 "data_size": 7936 00:28:37.256 } 00:28:37.256 ] 00:28:37.256 }' 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- 
# [[ none == \n\o\n\e ]] 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:37.256 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:37.257 18:43:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:37.257 18:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:37.515 [2024-07-15 18:43:23.021424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:37.515 [2024-07-15 18:43:23.021537] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:37.515 [2024-07-15 18:43:23.021551] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:37.515 request: 00:28:37.515 { 00:28:37.515 "base_bdev": "BaseBdev1", 00:28:37.515 "raid_bdev": "raid_bdev1", 00:28:37.515 "method": "bdev_raid_add_base_bdev", 00:28:37.515 "req_id": 1 00:28:37.515 } 00:28:37.515 Got JSON-RPC error response 00:28:37.515 response: 00:28:37.515 { 00:28:37.515 "code": -22, 00:28:37.516 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:37.516 } 00:28:37.516 18:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:28:37.516 18:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:37.516 18:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:37.516 18:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:37.516 18:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:38.891 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:38.891 18:43:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.892 "name": "raid_bdev1", 00:28:38.892 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:38.892 "strip_size_kb": 0, 00:28:38.892 "state": "online", 00:28:38.892 "raid_level": "raid1", 00:28:38.892 "superblock": true, 00:28:38.892 "num_base_bdevs": 2, 00:28:38.892 "num_base_bdevs_discovered": 1, 00:28:38.892 "num_base_bdevs_operational": 1, 00:28:38.892 "base_bdevs_list": [ 00:28:38.892 { 00:28:38.892 "name": null, 00:28:38.892 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:38.892 "is_configured": false, 00:28:38.892 "data_offset": 256, 00:28:38.892 "data_size": 7936 00:28:38.892 }, 00:28:38.892 { 00:28:38.892 "name": "BaseBdev2", 00:28:38.892 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:38.892 "is_configured": true, 00:28:38.892 "data_offset": 256, 00:28:38.892 "data_size": 7936 00:28:38.892 } 00:28:38.892 ] 00:28:38.892 }' 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.892 18:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.827 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.086 "name": "raid_bdev1", 00:28:40.086 "uuid": "7e0978b0-865b-4ff2-900e-5e5d003caffc", 00:28:40.086 "strip_size_kb": 0, 00:28:40.086 "state": "online", 00:28:40.086 "raid_level": "raid1", 00:28:40.086 "superblock": true, 00:28:40.086 "num_base_bdevs": 2, 00:28:40.086 
"num_base_bdevs_discovered": 1, 00:28:40.086 "num_base_bdevs_operational": 1, 00:28:40.086 "base_bdevs_list": [ 00:28:40.086 { 00:28:40.086 "name": null, 00:28:40.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.086 "is_configured": false, 00:28:40.086 "data_offset": 256, 00:28:40.086 "data_size": 7936 00:28:40.086 }, 00:28:40.086 { 00:28:40.086 "name": "BaseBdev2", 00:28:40.086 "uuid": "f81d62d8-f992-579c-b3fa-df93a1761a97", 00:28:40.086 "is_configured": true, 00:28:40.086 "data_offset": 256, 00:28:40.086 "data_size": 7936 00:28:40.086 } 00:28:40.086 ] 00:28:40.086 }' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2943757 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2943757 ']' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2943757 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2943757 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2943757' 00:28:40.086 killing process with pid 2943757 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2943757 00:28:40.086 Received shutdown signal, test time was about 60.000000 seconds 00:28:40.086 00:28:40.086 Latency(us) 00:28:40.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.086 =================================================================================================================== 00:28:40.086 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:40.086 [2024-07-15 18:43:25.594599] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:40.086 [2024-07-15 18:43:25.594686] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:40.086 [2024-07-15 18:43:25.594729] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:40.086 [2024-07-15 18:43:25.594740] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d2ce20 name raid_bdev1, state offline 00:28:40.086 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2943757 00:28:40.086 [2024-07-15 18:43:25.626662] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:40.346 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:28:40.346 00:28:40.346 real 0m34.125s 00:28:40.346 user 0m55.209s 00:28:40.346 sys 0m4.260s 00:28:40.346 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:40.346 18:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:40.346 ************************************ 00:28:40.346 
END TEST raid_rebuild_test_sb_md_separate 00:28:40.346 ************************************ 00:28:40.346 18:43:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:40.346 18:43:25 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:28:40.346 18:43:25 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:28:40.346 18:43:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:40.346 18:43:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:40.346 18:43:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:40.346 ************************************ 00:28:40.346 START TEST raid_state_function_test_sb_md_interleaved 00:28:40.346 ************************************ 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:40.346 18:43:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=2949340 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # 
echo 'Process raid pid: 2949340' 00:28:40.346 Process raid pid: 2949340 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2949340 /var/tmp/spdk-raid.sock 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2949340 ']' 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:40.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.346 18:43:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:40.606 [2024-07-15 18:43:25.931280] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:28:40.606 [2024-07-15 18:43:25.931342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.606 [2024-07-15 18:43:26.028953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.606 [2024-07-15 18:43:26.123386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.864 [2024-07-15 18:43:26.178828] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:40.864 [2024-07-15 18:43:26.178853] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:41.432 18:43:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.432 18:43:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:28:41.432 18:43:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:41.691 [2024-07-15 18:43:27.177693] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:41.691 [2024-07-15 18:43:27.177731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:41.691 [2024-07-15 18:43:27.177741] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:41.691 [2024-07-15 18:43:27.177750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.691 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:42.258 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.259 "name": "Existed_Raid", 00:28:42.259 "uuid": "9fdefcb9-4b53-492a-9420-3e324e497795", 00:28:42.259 "strip_size_kb": 0, 00:28:42.259 "state": "configuring", 00:28:42.259 "raid_level": "raid1", 00:28:42.259 "superblock": true, 00:28:42.259 "num_base_bdevs": 2, 00:28:42.259 "num_base_bdevs_discovered": 0, 00:28:42.259 "num_base_bdevs_operational": 2, 00:28:42.259 "base_bdevs_list": [ 00:28:42.259 { 
00:28:42.259 "name": "BaseBdev1", 00:28:42.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.259 "is_configured": false, 00:28:42.259 "data_offset": 0, 00:28:42.259 "data_size": 0 00:28:42.259 }, 00:28:42.259 { 00:28:42.259 "name": "BaseBdev2", 00:28:42.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.259 "is_configured": false, 00:28:42.259 "data_offset": 0, 00:28:42.259 "data_size": 0 00:28:42.259 } 00:28:42.259 ] 00:28:42.259 }' 00:28:42.259 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.259 18:43:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:42.827 18:43:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:43.394 [2024-07-15 18:43:28.833969] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:43.394 [2024-07-15 18:43:28.834000] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fcb80 name Existed_Raid, state configuring 00:28:43.394 18:43:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:43.651 [2024-07-15 18:43:29.158861] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:43.651 [2024-07-15 18:43:29.158893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:43.651 [2024-07-15 18:43:29.158901] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:43.651 [2024-07-15 18:43:29.158910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:43.651 
18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:28:44.218 [2024-07-15 18:43:29.657772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:44.218 BaseBdev1 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:44.218 18:43:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:44.477 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:45.045 [ 00:28:45.045 { 00:28:45.045 "name": "BaseBdev1", 00:28:45.045 "aliases": [ 00:28:45.045 "6adb4344-8435-4d24-add0-6f5c4ed91df5" 00:28:45.045 ], 00:28:45.045 "product_name": "Malloc disk", 00:28:45.045 "block_size": 4128, 00:28:45.045 "num_blocks": 8192, 00:28:45.045 "uuid": "6adb4344-8435-4d24-add0-6f5c4ed91df5", 00:28:45.045 "md_size": 32, 00:28:45.045 
"md_interleave": true, 00:28:45.045 "dif_type": 0, 00:28:45.045 "assigned_rate_limits": { 00:28:45.045 "rw_ios_per_sec": 0, 00:28:45.045 "rw_mbytes_per_sec": 0, 00:28:45.045 "r_mbytes_per_sec": 0, 00:28:45.045 "w_mbytes_per_sec": 0 00:28:45.045 }, 00:28:45.045 "claimed": true, 00:28:45.045 "claim_type": "exclusive_write", 00:28:45.045 "zoned": false, 00:28:45.045 "supported_io_types": { 00:28:45.045 "read": true, 00:28:45.045 "write": true, 00:28:45.045 "unmap": true, 00:28:45.045 "flush": true, 00:28:45.045 "reset": true, 00:28:45.045 "nvme_admin": false, 00:28:45.045 "nvme_io": false, 00:28:45.045 "nvme_io_md": false, 00:28:45.045 "write_zeroes": true, 00:28:45.045 "zcopy": true, 00:28:45.045 "get_zone_info": false, 00:28:45.045 "zone_management": false, 00:28:45.045 "zone_append": false, 00:28:45.045 "compare": false, 00:28:45.045 "compare_and_write": false, 00:28:45.045 "abort": true, 00:28:45.045 "seek_hole": false, 00:28:45.045 "seek_data": false, 00:28:45.045 "copy": true, 00:28:45.045 "nvme_iov_md": false 00:28:45.045 }, 00:28:45.045 "memory_domains": [ 00:28:45.045 { 00:28:45.045 "dma_device_id": "system", 00:28:45.045 "dma_device_type": 1 00:28:45.045 }, 00:28:45.045 { 00:28:45.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.045 "dma_device_type": 2 00:28:45.045 } 00:28:45.045 ], 00:28:45.045 "driver_specific": {} 00:28:45.045 } 00:28:45.045 ] 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:45.045 18:43:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.045 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:45.304 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.304 "name": "Existed_Raid", 00:28:45.304 "uuid": "2822a2d0-24e8-4128-9a88-bf315e03772b", 00:28:45.304 "strip_size_kb": 0, 00:28:45.304 "state": "configuring", 00:28:45.304 "raid_level": "raid1", 00:28:45.304 "superblock": true, 00:28:45.304 "num_base_bdevs": 2, 00:28:45.304 "num_base_bdevs_discovered": 1, 00:28:45.304 "num_base_bdevs_operational": 2, 00:28:45.304 "base_bdevs_list": [ 00:28:45.304 { 00:28:45.304 "name": "BaseBdev1", 00:28:45.304 "uuid": "6adb4344-8435-4d24-add0-6f5c4ed91df5", 00:28:45.304 "is_configured": true, 00:28:45.304 "data_offset": 256, 00:28:45.304 "data_size": 7936 00:28:45.304 }, 
00:28:45.304 { 00:28:45.304 "name": "BaseBdev2", 00:28:45.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.304 "is_configured": false, 00:28:45.304 "data_offset": 0, 00:28:45.304 "data_size": 0 00:28:45.304 } 00:28:45.304 ] 00:28:45.304 }' 00:28:45.304 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.304 18:43:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:46.241 18:43:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:46.500 [2024-07-15 18:43:31.907843] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:46.500 [2024-07-15 18:43:31.907881] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fc470 name Existed_Raid, state configuring 00:28:46.500 18:43:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:46.758 [2024-07-15 18:43:32.168578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:46.758 [2024-07-15 18:43:32.170107] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:46.758 [2024-07-15 18:43:32.170138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.758 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.759 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.759 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.759 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:47.018 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.018 "name": "Existed_Raid", 00:28:47.018 "uuid": "44f44074-0cf9-46af-bb6d-351293e561ac", 00:28:47.018 "strip_size_kb": 0, 00:28:47.018 "state": "configuring", 00:28:47.018 "raid_level": "raid1", 00:28:47.018 "superblock": true, 00:28:47.018 "num_base_bdevs": 2, 
00:28:47.018 "num_base_bdevs_discovered": 1, 00:28:47.018 "num_base_bdevs_operational": 2, 00:28:47.018 "base_bdevs_list": [ 00:28:47.018 { 00:28:47.018 "name": "BaseBdev1", 00:28:47.018 "uuid": "6adb4344-8435-4d24-add0-6f5c4ed91df5", 00:28:47.018 "is_configured": true, 00:28:47.018 "data_offset": 256, 00:28:47.018 "data_size": 7936 00:28:47.018 }, 00:28:47.018 { 00:28:47.018 "name": "BaseBdev2", 00:28:47.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.018 "is_configured": false, 00:28:47.018 "data_offset": 0, 00:28:47.018 "data_size": 0 00:28:47.018 } 00:28:47.018 ] 00:28:47.018 }' 00:28:47.018 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.018 18:43:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:47.585 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:28:48.173 [2024-07-15 18:43:33.587853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:48.173 [2024-07-15 18:43:33.587991] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16fbc80 00:28:48.173 [2024-07-15 18:43:33.588004] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:48.173 [2024-07-15 18:43:33.588064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177e520 00:28:48.173 [2024-07-15 18:43:33.588139] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16fbc80 00:28:48.173 [2024-07-15 18:43:33.588147] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16fbc80 00:28:48.173 [2024-07-15 18:43:33.588201] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:48.173 BaseBdev2 
00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:48.173 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:48.431 18:43:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:48.689 [ 00:28:48.689 { 00:28:48.689 "name": "BaseBdev2", 00:28:48.689 "aliases": [ 00:28:48.689 "0b3ad2cc-0b28-4fcd-a204-386398e56476" 00:28:48.689 ], 00:28:48.689 "product_name": "Malloc disk", 00:28:48.689 "block_size": 4128, 00:28:48.689 "num_blocks": 8192, 00:28:48.689 "uuid": "0b3ad2cc-0b28-4fcd-a204-386398e56476", 00:28:48.689 "md_size": 32, 00:28:48.689 "md_interleave": true, 00:28:48.689 "dif_type": 0, 00:28:48.689 "assigned_rate_limits": { 00:28:48.689 "rw_ios_per_sec": 0, 00:28:48.689 "rw_mbytes_per_sec": 0, 00:28:48.689 "r_mbytes_per_sec": 0, 00:28:48.689 "w_mbytes_per_sec": 0 00:28:48.689 }, 00:28:48.689 "claimed": true, 00:28:48.689 "claim_type": "exclusive_write", 00:28:48.689 "zoned": false, 00:28:48.689 "supported_io_types": { 
00:28:48.689 "read": true, 00:28:48.689 "write": true, 00:28:48.689 "unmap": true, 00:28:48.689 "flush": true, 00:28:48.689 "reset": true, 00:28:48.689 "nvme_admin": false, 00:28:48.689 "nvme_io": false, 00:28:48.689 "nvme_io_md": false, 00:28:48.689 "write_zeroes": true, 00:28:48.689 "zcopy": true, 00:28:48.689 "get_zone_info": false, 00:28:48.689 "zone_management": false, 00:28:48.689 "zone_append": false, 00:28:48.689 "compare": false, 00:28:48.689 "compare_and_write": false, 00:28:48.689 "abort": true, 00:28:48.689 "seek_hole": false, 00:28:48.689 "seek_data": false, 00:28:48.689 "copy": true, 00:28:48.689 "nvme_iov_md": false 00:28:48.689 }, 00:28:48.689 "memory_domains": [ 00:28:48.689 { 00:28:48.689 "dma_device_id": "system", 00:28:48.689 "dma_device_type": 1 00:28:48.689 }, 00:28:48.689 { 00:28:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.689 "dma_device_type": 2 00:28:48.689 } 00:28:48.689 ], 00:28:48.689 "driver_specific": {} 00:28:48.689 } 00:28:48.689 ] 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.947 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:49.204 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:49.204 "name": "Existed_Raid", 00:28:49.204 "uuid": "44f44074-0cf9-46af-bb6d-351293e561ac", 00:28:49.204 "strip_size_kb": 0, 00:28:49.204 "state": "online", 00:28:49.204 "raid_level": "raid1", 00:28:49.204 "superblock": true, 00:28:49.204 "num_base_bdevs": 2, 00:28:49.204 "num_base_bdevs_discovered": 2, 00:28:49.204 "num_base_bdevs_operational": 2, 00:28:49.204 "base_bdevs_list": [ 00:28:49.204 { 00:28:49.204 "name": "BaseBdev1", 00:28:49.204 "uuid": "6adb4344-8435-4d24-add0-6f5c4ed91df5", 00:28:49.204 "is_configured": true, 00:28:49.204 "data_offset": 256, 00:28:49.205 "data_size": 7936 00:28:49.205 }, 00:28:49.205 { 00:28:49.205 "name": "BaseBdev2", 00:28:49.205 "uuid": "0b3ad2cc-0b28-4fcd-a204-386398e56476", 00:28:49.205 "is_configured": true, 00:28:49.205 "data_offset": 256, 00:28:49.205 
"data_size": 7936 00:28:49.205 } 00:28:49.205 ] 00:28:49.205 }' 00:28:49.205 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:49.205 18:43:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:50.137 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:50.562 [2024-07-15 18:43:35.854317] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:50.562 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:50.562 "name": "Existed_Raid", 00:28:50.562 "aliases": [ 00:28:50.562 "44f44074-0cf9-46af-bb6d-351293e561ac" 00:28:50.562 ], 00:28:50.562 "product_name": "Raid Volume", 00:28:50.562 "block_size": 4128, 00:28:50.562 "num_blocks": 7936, 00:28:50.562 "uuid": "44f44074-0cf9-46af-bb6d-351293e561ac", 00:28:50.562 "md_size": 32, 
00:28:50.562 "md_interleave": true, 00:28:50.562 "dif_type": 0, 00:28:50.562 "assigned_rate_limits": { 00:28:50.562 "rw_ios_per_sec": 0, 00:28:50.562 "rw_mbytes_per_sec": 0, 00:28:50.562 "r_mbytes_per_sec": 0, 00:28:50.562 "w_mbytes_per_sec": 0 00:28:50.562 }, 00:28:50.562 "claimed": false, 00:28:50.562 "zoned": false, 00:28:50.562 "supported_io_types": { 00:28:50.562 "read": true, 00:28:50.562 "write": true, 00:28:50.562 "unmap": false, 00:28:50.562 "flush": false, 00:28:50.562 "reset": true, 00:28:50.562 "nvme_admin": false, 00:28:50.562 "nvme_io": false, 00:28:50.562 "nvme_io_md": false, 00:28:50.562 "write_zeroes": true, 00:28:50.562 "zcopy": false, 00:28:50.562 "get_zone_info": false, 00:28:50.562 "zone_management": false, 00:28:50.562 "zone_append": false, 00:28:50.562 "compare": false, 00:28:50.562 "compare_and_write": false, 00:28:50.562 "abort": false, 00:28:50.562 "seek_hole": false, 00:28:50.562 "seek_data": false, 00:28:50.562 "copy": false, 00:28:50.562 "nvme_iov_md": false 00:28:50.562 }, 00:28:50.562 "memory_domains": [ 00:28:50.562 { 00:28:50.562 "dma_device_id": "system", 00:28:50.562 "dma_device_type": 1 00:28:50.562 }, 00:28:50.562 { 00:28:50.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:50.562 "dma_device_type": 2 00:28:50.562 }, 00:28:50.562 { 00:28:50.562 "dma_device_id": "system", 00:28:50.562 "dma_device_type": 1 00:28:50.562 }, 00:28:50.562 { 00:28:50.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:50.562 "dma_device_type": 2 00:28:50.562 } 00:28:50.562 ], 00:28:50.562 "driver_specific": { 00:28:50.562 "raid": { 00:28:50.562 "uuid": "44f44074-0cf9-46af-bb6d-351293e561ac", 00:28:50.562 "strip_size_kb": 0, 00:28:50.562 "state": "online", 00:28:50.562 "raid_level": "raid1", 00:28:50.562 "superblock": true, 00:28:50.562 "num_base_bdevs": 2, 00:28:50.562 "num_base_bdevs_discovered": 2, 00:28:50.562 "num_base_bdevs_operational": 2, 00:28:50.562 "base_bdevs_list": [ 00:28:50.562 { 00:28:50.562 "name": "BaseBdev1", 00:28:50.562 "uuid": 
"6adb4344-8435-4d24-add0-6f5c4ed91df5", 00:28:50.562 "is_configured": true, 00:28:50.562 "data_offset": 256, 00:28:50.562 "data_size": 7936 00:28:50.562 }, 00:28:50.562 { 00:28:50.562 "name": "BaseBdev2", 00:28:50.562 "uuid": "0b3ad2cc-0b28-4fcd-a204-386398e56476", 00:28:50.562 "is_configured": true, 00:28:50.563 "data_offset": 256, 00:28:50.563 "data_size": 7936 00:28:50.563 } 00:28:50.563 ] 00:28:50.563 } 00:28:50.563 } 00:28:50.563 }' 00:28:50.563 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:50.563 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:50.563 BaseBdev2' 00:28:50.563 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:50.563 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:50.563 18:43:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:51.186 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:51.186 "name": "BaseBdev1", 00:28:51.186 "aliases": [ 00:28:51.186 "6adb4344-8435-4d24-add0-6f5c4ed91df5" 00:28:51.186 ], 00:28:51.186 "product_name": "Malloc disk", 00:28:51.186 "block_size": 4128, 00:28:51.186 "num_blocks": 8192, 00:28:51.186 "uuid": "6adb4344-8435-4d24-add0-6f5c4ed91df5", 00:28:51.186 "md_size": 32, 00:28:51.186 "md_interleave": true, 00:28:51.186 "dif_type": 0, 00:28:51.186 "assigned_rate_limits": { 00:28:51.186 "rw_ios_per_sec": 0, 00:28:51.186 "rw_mbytes_per_sec": 0, 00:28:51.186 "r_mbytes_per_sec": 0, 00:28:51.186 "w_mbytes_per_sec": 0 00:28:51.186 }, 00:28:51.186 "claimed": 
true, 00:28:51.186 "claim_type": "exclusive_write", 00:28:51.186 "zoned": false, 00:28:51.186 "supported_io_types": { 00:28:51.186 "read": true, 00:28:51.186 "write": true, 00:28:51.186 "unmap": true, 00:28:51.186 "flush": true, 00:28:51.186 "reset": true, 00:28:51.186 "nvme_admin": false, 00:28:51.186 "nvme_io": false, 00:28:51.186 "nvme_io_md": false, 00:28:51.186 "write_zeroes": true, 00:28:51.186 "zcopy": true, 00:28:51.186 "get_zone_info": false, 00:28:51.186 "zone_management": false, 00:28:51.186 "zone_append": false, 00:28:51.186 "compare": false, 00:28:51.186 "compare_and_write": false, 00:28:51.186 "abort": true, 00:28:51.186 "seek_hole": false, 00:28:51.186 "seek_data": false, 00:28:51.186 "copy": true, 00:28:51.186 "nvme_iov_md": false 00:28:51.186 }, 00:28:51.186 "memory_domains": [ 00:28:51.186 { 00:28:51.186 "dma_device_id": "system", 00:28:51.186 "dma_device_type": 1 00:28:51.186 }, 00:28:51.186 { 00:28:51.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:51.186 "dma_device_type": 2 00:28:51.186 } 00:28:51.186 ], 00:28:51.186 "driver_specific": {} 00:28:51.186 }' 00:28:51.186 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:51.186 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:51.186 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:51.186 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:51.186 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:51.442 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:51.442 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:51.442 18:43:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:51.442 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:51.442 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:51.442 18:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:51.700 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:51.700 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:51.700 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:51.700 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:52.266 "name": "BaseBdev2", 00:28:52.266 "aliases": [ 00:28:52.266 "0b3ad2cc-0b28-4fcd-a204-386398e56476" 00:28:52.266 ], 00:28:52.266 "product_name": "Malloc disk", 00:28:52.266 "block_size": 4128, 00:28:52.266 "num_blocks": 8192, 00:28:52.266 "uuid": "0b3ad2cc-0b28-4fcd-a204-386398e56476", 00:28:52.266 "md_size": 32, 00:28:52.266 "md_interleave": true, 00:28:52.266 "dif_type": 0, 00:28:52.266 "assigned_rate_limits": { 00:28:52.266 "rw_ios_per_sec": 0, 00:28:52.266 "rw_mbytes_per_sec": 0, 00:28:52.266 "r_mbytes_per_sec": 0, 00:28:52.266 "w_mbytes_per_sec": 0 00:28:52.266 }, 00:28:52.266 "claimed": true, 00:28:52.266 "claim_type": "exclusive_write", 00:28:52.266 "zoned": false, 00:28:52.266 "supported_io_types": { 00:28:52.266 "read": true, 00:28:52.266 "write": true, 00:28:52.266 "unmap": true, 00:28:52.266 
"flush": true, 00:28:52.266 "reset": true, 00:28:52.266 "nvme_admin": false, 00:28:52.266 "nvme_io": false, 00:28:52.266 "nvme_io_md": false, 00:28:52.266 "write_zeroes": true, 00:28:52.266 "zcopy": true, 00:28:52.266 "get_zone_info": false, 00:28:52.266 "zone_management": false, 00:28:52.266 "zone_append": false, 00:28:52.266 "compare": false, 00:28:52.266 "compare_and_write": false, 00:28:52.266 "abort": true, 00:28:52.266 "seek_hole": false, 00:28:52.266 "seek_data": false, 00:28:52.266 "copy": true, 00:28:52.266 "nvme_iov_md": false 00:28:52.266 }, 00:28:52.266 "memory_domains": [ 00:28:52.266 { 00:28:52.266 "dma_device_id": "system", 00:28:52.266 "dma_device_type": 1 00:28:52.266 }, 00:28:52.266 { 00:28:52.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.266 "dma_device_type": 2 00:28:52.266 } 00:28:52.266 ], 00:28:52.266 "driver_specific": {} 00:28:52.266 }' 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.266 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.524 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:52.524 18:43:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.524 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.524 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:52.524 18:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:53.090 [2024-07-15 18:43:38.396908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:53.090 18:43:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.090 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:53.349 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:53.349 "name": "Existed_Raid", 00:28:53.349 "uuid": "44f44074-0cf9-46af-bb6d-351293e561ac", 00:28:53.349 "strip_size_kb": 0, 00:28:53.349 "state": "online", 00:28:53.349 "raid_level": "raid1", 00:28:53.349 "superblock": true, 00:28:53.349 "num_base_bdevs": 2, 00:28:53.349 "num_base_bdevs_discovered": 1, 00:28:53.349 "num_base_bdevs_operational": 1, 00:28:53.349 "base_bdevs_list": [ 00:28:53.349 { 00:28:53.349 "name": null, 00:28:53.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.349 "is_configured": false, 00:28:53.349 "data_offset": 256, 00:28:53.349 "data_size": 7936 00:28:53.349 }, 00:28:53.349 { 00:28:53.349 "name": "BaseBdev2", 00:28:53.349 "uuid": "0b3ad2cc-0b28-4fcd-a204-386398e56476", 00:28:53.349 "is_configured": true, 00:28:53.349 "data_offset": 256, 00:28:53.349 "data_size": 7936 00:28:53.349 } 00:28:53.349 ] 00:28:53.349 }' 00:28:53.349 
18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:53.349 18:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:53.915 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:53.915 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:53.915 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.915 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:54.173 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:54.173 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:54.173 18:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:54.740 [2024-07-15 18:43:40.030449] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:54.740 [2024-07-15 18:43:40.030539] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:54.740 [2024-07-15 18:43:40.041665] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:54.740 [2024-07-15 18:43:40.041698] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:54.740 [2024-07-15 18:43:40.041706] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fbc80 name Existed_Raid, state offline 00:28:54.740 18:43:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:54.740 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:54.740 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.740 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2949340 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2949340 ']' 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2949340 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2949340 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2949340' 00:28:54.999 killing process with pid 2949340 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2949340 00:28:54.999 [2024-07-15 18:43:40.374422] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:54.999 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2949340 00:28:54.999 [2024-07-15 18:43:40.375285] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:55.258 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:28:55.258 00:28:55.258 real 0m14.703s 00:28:55.258 user 0m27.135s 00:28:55.258 sys 0m1.938s 00:28:55.258 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:55.258 18:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:55.258 ************************************ 00:28:55.258 END TEST raid_state_function_test_sb_md_interleaved 00:28:55.258 ************************************ 00:28:55.258 18:43:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:55.258 18:43:40 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:55.258 18:43:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:55.258 18:43:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.258 18:43:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:55.258 ************************************ 00:28:55.258 START TEST raid_superblock_test_md_interleaved 00:28:55.258 ************************************ 00:28:55.258 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:28:55.258 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:28:55.258 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2951767 00:28:55.259 18:43:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2951767 /var/tmp/spdk-raid.sock 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2951767 ']' 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:55.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.259 18:43:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:55.259 [2024-07-15 18:43:40.709168] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:28:55.259 [2024-07-15 18:43:40.709279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951767 ] 00:28:55.517 [2024-07-15 18:43:40.845676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.517 [2024-07-15 18:43:40.939899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.517 [2024-07-15 18:43:40.999770] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:55.517 [2024-07-15 18:43:40.999803] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:56.452 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.452 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:28:56.452 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:28:56.452 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 
00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:56.453 malloc1 00:28:56.453 18:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:56.712 [2024-07-15 18:43:42.145281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:56.712 [2024-07-15 18:43:42.145327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:56.712 [2024-07-15 18:43:42.145345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189f770 00:28:56.712 [2024-07-15 18:43:42.145354] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:56.712 [2024-07-15 18:43:42.146902] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:56.712 [2024-07-15 18:43:42.146930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:56.712 pt1 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:56.712 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:56.971 malloc2 00:28:56.971 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:57.230 [2024-07-15 18:43:42.663401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:57.230 [2024-07-15 18:43:42.663446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.230 [2024-07-15 18:43:42.663461] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a2cf10 00:28:57.230 [2024-07-15 18:43:42.663470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.230 [2024-07-15 18:43:42.664873] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.230 [2024-07-15 18:43:42.664899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:57.230 pt2 00:28:57.230 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:57.230 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:57.230 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'pt1 pt2' -n raid_bdev1 -s 00:28:57.489 [2024-07-15 18:43:42.916085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:57.489 [2024-07-15 18:43:42.917571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:57.489 [2024-07-15 18:43:42.917719] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a2e880 00:28:57.489 [2024-07-15 18:43:42.917731] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:57.489 [2024-07-15 18:43:42.917796] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x189d960 00:28:57.489 [2024-07-15 18:43:42.917881] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a2e880 00:28:57.489 [2024-07-15 18:43:42.917889] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a2e880 00:28:57.489 [2024-07-15 18:43:42.917947] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.489 18:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.747 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.747 "name": "raid_bdev1", 00:28:57.747 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:28:57.747 "strip_size_kb": 0, 00:28:57.747 "state": "online", 00:28:57.747 "raid_level": "raid1", 00:28:57.747 "superblock": true, 00:28:57.747 "num_base_bdevs": 2, 00:28:57.747 "num_base_bdevs_discovered": 2, 00:28:57.747 "num_base_bdevs_operational": 2, 00:28:57.747 "base_bdevs_list": [ 00:28:57.747 { 00:28:57.747 "name": "pt1", 00:28:57.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:57.747 "is_configured": true, 00:28:57.747 "data_offset": 256, 00:28:57.747 "data_size": 7936 00:28:57.747 }, 00:28:57.747 { 00:28:57.747 "name": "pt2", 00:28:57.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:57.747 "is_configured": true, 00:28:57.747 "data_offset": 256, 00:28:57.747 "data_size": 7936 00:28:57.747 } 00:28:57.747 ] 00:28:57.747 }' 00:28:57.747 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.747 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:28:58.313 18:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:58.313 18:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:58.572 [2024-07-15 18:43:44.023460] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:58.572 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:58.572 "name": "raid_bdev1", 00:28:58.572 "aliases": [ 00:28:58.572 "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52" 00:28:58.572 ], 00:28:58.572 "product_name": "Raid Volume", 00:28:58.572 "block_size": 4128, 00:28:58.572 "num_blocks": 7936, 00:28:58.572 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:28:58.572 "md_size": 32, 00:28:58.572 "md_interleave": true, 00:28:58.572 "dif_type": 0, 00:28:58.572 "assigned_rate_limits": { 00:28:58.572 "rw_ios_per_sec": 0, 00:28:58.572 "rw_mbytes_per_sec": 0, 00:28:58.572 "r_mbytes_per_sec": 0, 00:28:58.572 "w_mbytes_per_sec": 0 00:28:58.572 }, 00:28:58.572 "claimed": false, 00:28:58.572 "zoned": false, 00:28:58.572 "supported_io_types": { 00:28:58.572 "read": true, 00:28:58.572 "write": true, 00:28:58.572 "unmap": false, 00:28:58.572 "flush": false, 00:28:58.572 "reset": true, 00:28:58.572 "nvme_admin": false, 
00:28:58.572 "nvme_io": false, 00:28:58.572 "nvme_io_md": false, 00:28:58.572 "write_zeroes": true, 00:28:58.572 "zcopy": false, 00:28:58.572 "get_zone_info": false, 00:28:58.572 "zone_management": false, 00:28:58.572 "zone_append": false, 00:28:58.572 "compare": false, 00:28:58.572 "compare_and_write": false, 00:28:58.572 "abort": false, 00:28:58.572 "seek_hole": false, 00:28:58.572 "seek_data": false, 00:28:58.572 "copy": false, 00:28:58.572 "nvme_iov_md": false 00:28:58.572 }, 00:28:58.572 "memory_domains": [ 00:28:58.572 { 00:28:58.572 "dma_device_id": "system", 00:28:58.572 "dma_device_type": 1 00:28:58.572 }, 00:28:58.572 { 00:28:58.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.572 "dma_device_type": 2 00:28:58.572 }, 00:28:58.572 { 00:28:58.572 "dma_device_id": "system", 00:28:58.572 "dma_device_type": 1 00:28:58.572 }, 00:28:58.572 { 00:28:58.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.572 "dma_device_type": 2 00:28:58.572 } 00:28:58.572 ], 00:28:58.572 "driver_specific": { 00:28:58.572 "raid": { 00:28:58.572 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:28:58.572 "strip_size_kb": 0, 00:28:58.572 "state": "online", 00:28:58.572 "raid_level": "raid1", 00:28:58.572 "superblock": true, 00:28:58.572 "num_base_bdevs": 2, 00:28:58.572 "num_base_bdevs_discovered": 2, 00:28:58.572 "num_base_bdevs_operational": 2, 00:28:58.572 "base_bdevs_list": [ 00:28:58.572 { 00:28:58.572 "name": "pt1", 00:28:58.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:58.572 "is_configured": true, 00:28:58.572 "data_offset": 256, 00:28:58.572 "data_size": 7936 00:28:58.572 }, 00:28:58.572 { 00:28:58.572 "name": "pt2", 00:28:58.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:58.572 "is_configured": true, 00:28:58.572 "data_offset": 256, 00:28:58.572 "data_size": 7936 00:28:58.572 } 00:28:58.572 ] 00:28:58.572 } 00:28:58.572 } 00:28:58.572 }' 00:28:58.572 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:58.572 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:58.572 pt2' 00:28:58.572 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:58.572 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:58.572 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:58.831 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:58.831 "name": "pt1", 00:28:58.831 "aliases": [ 00:28:58.831 "00000000-0000-0000-0000-000000000001" 00:28:58.831 ], 00:28:58.831 "product_name": "passthru", 00:28:58.831 "block_size": 4128, 00:28:58.831 "num_blocks": 8192, 00:28:58.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:58.831 "md_size": 32, 00:28:58.831 "md_interleave": true, 00:28:58.831 "dif_type": 0, 00:28:58.831 "assigned_rate_limits": { 00:28:58.831 "rw_ios_per_sec": 0, 00:28:58.831 "rw_mbytes_per_sec": 0, 00:28:58.831 "r_mbytes_per_sec": 0, 00:28:58.831 "w_mbytes_per_sec": 0 00:28:58.831 }, 00:28:58.831 "claimed": true, 00:28:58.831 "claim_type": "exclusive_write", 00:28:58.831 "zoned": false, 00:28:58.831 "supported_io_types": { 00:28:58.831 "read": true, 00:28:58.831 "write": true, 00:28:58.831 "unmap": true, 00:28:58.831 "flush": true, 00:28:58.831 "reset": true, 00:28:58.831 "nvme_admin": false, 00:28:58.831 "nvme_io": false, 00:28:58.831 "nvme_io_md": false, 00:28:58.831 "write_zeroes": true, 00:28:58.831 "zcopy": true, 00:28:58.831 "get_zone_info": false, 00:28:58.831 "zone_management": false, 00:28:58.831 "zone_append": false, 00:28:58.831 "compare": false, 00:28:58.831 "compare_and_write": false, 00:28:58.831 
"abort": true, 00:28:58.831 "seek_hole": false, 00:28:58.831 "seek_data": false, 00:28:58.831 "copy": true, 00:28:58.831 "nvme_iov_md": false 00:28:58.831 }, 00:28:58.831 "memory_domains": [ 00:28:58.831 { 00:28:58.831 "dma_device_id": "system", 00:28:58.831 "dma_device_type": 1 00:28:58.831 }, 00:28:58.831 { 00:28:58.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.831 "dma_device_type": 2 00:28:58.831 } 00:28:58.831 ], 00:28:58.831 "driver_specific": { 00:28:58.831 "passthru": { 00:28:58.831 "name": "pt1", 00:28:58.831 "base_bdev_name": "malloc1" 00:28:58.831 } 00:28:58.831 } 00:28:58.831 }' 00:28:58.831 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:59.090 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.348 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.348 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:59.348 18:43:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:59.348 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:59.348 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:59.607 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:59.607 "name": "pt2", 00:28:59.607 "aliases": [ 00:28:59.607 "00000000-0000-0000-0000-000000000002" 00:28:59.607 ], 00:28:59.607 "product_name": "passthru", 00:28:59.607 "block_size": 4128, 00:28:59.607 "num_blocks": 8192, 00:28:59.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:59.607 "md_size": 32, 00:28:59.607 "md_interleave": true, 00:28:59.607 "dif_type": 0, 00:28:59.607 "assigned_rate_limits": { 00:28:59.607 "rw_ios_per_sec": 0, 00:28:59.607 "rw_mbytes_per_sec": 0, 00:28:59.607 "r_mbytes_per_sec": 0, 00:28:59.607 "w_mbytes_per_sec": 0 00:28:59.607 }, 00:28:59.607 "claimed": true, 00:28:59.607 "claim_type": "exclusive_write", 00:28:59.607 "zoned": false, 00:28:59.607 "supported_io_types": { 00:28:59.607 "read": true, 00:28:59.607 "write": true, 00:28:59.607 "unmap": true, 00:28:59.607 "flush": true, 00:28:59.607 "reset": true, 00:28:59.607 "nvme_admin": false, 00:28:59.607 "nvme_io": false, 00:28:59.607 "nvme_io_md": false, 00:28:59.607 "write_zeroes": true, 00:28:59.607 "zcopy": true, 00:28:59.607 "get_zone_info": false, 00:28:59.607 "zone_management": false, 00:28:59.607 "zone_append": false, 00:28:59.607 "compare": false, 00:28:59.607 "compare_and_write": false, 00:28:59.607 "abort": true, 00:28:59.607 "seek_hole": false, 00:28:59.607 "seek_data": false, 00:28:59.607 "copy": true, 00:28:59.607 "nvme_iov_md": false 00:28:59.607 }, 00:28:59.607 "memory_domains": [ 00:28:59.608 { 00:28:59.608 "dma_device_id": 
"system", 00:28:59.608 "dma_device_type": 1 00:28:59.608 }, 00:28:59.608 { 00:28:59.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.608 "dma_device_type": 2 00:28:59.608 } 00:28:59.608 ], 00:28:59.608 "driver_specific": { 00:28:59.608 "passthru": { 00:28:59.608 "name": "pt2", 00:28:59.608 "base_bdev_name": "malloc2" 00:28:59.608 } 00:28:59.608 } 00:28:59.608 }' 00:28:59.608 18:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.608 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.608 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:59.608 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.608 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:59.867 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:59.867 18:43:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:29:00.126 [2024-07-15 18:43:45.591695] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:00.126 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a7b671e5-3ecf-4ef6-b76e-416eafc0ba52 00:29:00.126 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z a7b671e5-3ecf-4ef6-b76e-416eafc0ba52 ']' 00:29:00.126 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:00.385 [2024-07-15 18:43:45.844112] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:00.385 [2024-07-15 18:43:45.844134] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:00.385 [2024-07-15 18:43:45.844189] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:00.385 [2024-07-15 18:43:45.844240] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:00.385 [2024-07-15 18:43:45.844249] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a2e880 name raid_bdev1, state offline 00:29:00.385 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.385 18:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:29:00.643 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:29:00.643 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:29:00.643 18:43:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.643 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:00.902 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.902 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:01.161 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:01.161 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:01.419 18:43:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:01.419 18:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:01.678 [2024-07-15 18:43:47.107457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:01.678 [2024-07-15 18:43:47.108875] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:01.678 [2024-07-15 18:43:47.108931] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:01.678 [2024-07-15 18:43:47.108977] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:01.678 [2024-07-15 18:43:47.108993] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:01.678 [2024-07-15 18:43:47.109001] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a2bdc0 name raid_bdev1, state configuring 00:29:01.678 request: 00:29:01.678 { 00:29:01.678 "name": "raid_bdev1", 00:29:01.678 "raid_level": "raid1", 00:29:01.678 "base_bdevs": [ 00:29:01.678 "malloc1", 00:29:01.678 "malloc2" 00:29:01.678 ], 00:29:01.678 "superblock": false, 00:29:01.678 "method": "bdev_raid_create", 00:29:01.678 "req_id": 1 00:29:01.678 } 00:29:01.678 Got JSON-RPC error response 00:29:01.678 response: 00:29:01.678 { 00:29:01.678 "code": -17, 00:29:01.678 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:01.678 } 00:29:01.678 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:29:01.678 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.678 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.678 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.678 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.678 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:29:01.936 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:29:01.936 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:29:01.936 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:29:02.195 [2024-07-15 18:43:47.604728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:02.195 [2024-07-15 18:43:47.604770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.195 [2024-07-15 18:43:47.604786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189d760 00:29:02.195 [2024-07-15 18:43:47.604796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.195 [2024-07-15 18:43:47.606271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.195 [2024-07-15 18:43:47.606299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:02.195 [2024-07-15 18:43:47.606342] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:02.195 [2024-07-15 18:43:47.606365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:02.195 pt1 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.195 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.453 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.453 "name": "raid_bdev1", 00:29:02.453 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:02.453 "strip_size_kb": 0, 00:29:02.453 "state": "configuring", 00:29:02.453 "raid_level": "raid1", 00:29:02.453 "superblock": true, 00:29:02.453 "num_base_bdevs": 2, 00:29:02.453 "num_base_bdevs_discovered": 1, 00:29:02.453 "num_base_bdevs_operational": 2, 00:29:02.453 "base_bdevs_list": [ 00:29:02.453 { 00:29:02.453 "name": "pt1", 00:29:02.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:02.453 "is_configured": true, 00:29:02.453 "data_offset": 256, 00:29:02.453 "data_size": 7936 00:29:02.453 }, 00:29:02.453 { 00:29:02.453 "name": null, 00:29:02.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:02.453 "is_configured": false, 00:29:02.453 "data_offset": 256, 00:29:02.453 "data_size": 7936 00:29:02.453 } 00:29:02.453 ] 00:29:02.453 }' 00:29:02.453 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.453 18:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:03.019 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:29:03.019 18:43:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:29:03.019 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:03.019 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:03.278 [2024-07-15 18:43:48.755858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:03.278 [2024-07-15 18:43:48.755905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.278 [2024-07-15 18:43:48.755920] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a31b90 00:29:03.278 [2024-07-15 18:43:48.755929] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.278 [2024-07-15 18:43:48.756095] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.278 [2024-07-15 18:43:48.756110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:03.278 [2024-07-15 18:43:48.756153] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:03.278 [2024-07-15 18:43:48.756170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:03.278 [2024-07-15 18:43:48.756251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a302a0 00:29:03.278 [2024-07-15 18:43:48.756261] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:03.278 [2024-07-15 18:43:48.756316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a312a0 00:29:03.278 [2024-07-15 18:43:48.756397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a302a0 00:29:03.278 [2024-07-15 18:43:48.756405] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a302a0 00:29:03.278 [2024-07-15 18:43:48.756461] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.278 pt2 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.278 18:43:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.278 18:43:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.537 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.537 "name": "raid_bdev1", 00:29:03.537 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:03.537 "strip_size_kb": 0, 00:29:03.537 "state": "online", 00:29:03.537 "raid_level": "raid1", 00:29:03.537 "superblock": true, 00:29:03.537 "num_base_bdevs": 2, 00:29:03.537 "num_base_bdevs_discovered": 2, 00:29:03.537 "num_base_bdevs_operational": 2, 00:29:03.537 "base_bdevs_list": [ 00:29:03.537 { 00:29:03.537 "name": "pt1", 00:29:03.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:03.537 "is_configured": true, 00:29:03.537 "data_offset": 256, 00:29:03.537 "data_size": 7936 00:29:03.537 }, 00:29:03.537 { 00:29:03.537 "name": "pt2", 00:29:03.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.537 "is_configured": true, 00:29:03.537 "data_offset": 256, 00:29:03.537 "data_size": 7936 00:29:03.537 } 00:29:03.537 ] 00:29:03.537 }' 00:29:03.537 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.537 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:04.471 18:43:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:04.471 [2024-07-15 18:43:49.899222] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:04.471 "name": "raid_bdev1", 00:29:04.471 "aliases": [ 00:29:04.471 "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52" 00:29:04.471 ], 00:29:04.471 "product_name": "Raid Volume", 00:29:04.471 "block_size": 4128, 00:29:04.471 "num_blocks": 7936, 00:29:04.471 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:04.471 "md_size": 32, 00:29:04.471 "md_interleave": true, 00:29:04.471 "dif_type": 0, 00:29:04.471 "assigned_rate_limits": { 00:29:04.471 "rw_ios_per_sec": 0, 00:29:04.471 "rw_mbytes_per_sec": 0, 00:29:04.471 "r_mbytes_per_sec": 0, 00:29:04.471 "w_mbytes_per_sec": 0 00:29:04.471 }, 00:29:04.471 "claimed": false, 00:29:04.471 "zoned": false, 00:29:04.471 "supported_io_types": { 00:29:04.471 "read": true, 00:29:04.471 "write": true, 00:29:04.471 "unmap": false, 00:29:04.471 "flush": false, 00:29:04.471 "reset": true, 00:29:04.471 "nvme_admin": false, 00:29:04.471 "nvme_io": false, 00:29:04.471 "nvme_io_md": false, 00:29:04.471 "write_zeroes": true, 00:29:04.471 "zcopy": false, 00:29:04.471 "get_zone_info": false, 00:29:04.471 "zone_management": false, 00:29:04.471 "zone_append": false, 00:29:04.471 "compare": false, 00:29:04.471 "compare_and_write": false, 00:29:04.471 "abort": false, 00:29:04.471 "seek_hole": false, 00:29:04.471 "seek_data": false, 00:29:04.471 "copy": false, 00:29:04.471 "nvme_iov_md": false 00:29:04.471 }, 
00:29:04.471 "memory_domains": [ 00:29:04.471 { 00:29:04.471 "dma_device_id": "system", 00:29:04.471 "dma_device_type": 1 00:29:04.471 }, 00:29:04.471 { 00:29:04.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.471 "dma_device_type": 2 00:29:04.471 }, 00:29:04.471 { 00:29:04.471 "dma_device_id": "system", 00:29:04.471 "dma_device_type": 1 00:29:04.471 }, 00:29:04.471 { 00:29:04.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.471 "dma_device_type": 2 00:29:04.471 } 00:29:04.471 ], 00:29:04.471 "driver_specific": { 00:29:04.471 "raid": { 00:29:04.471 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:04.471 "strip_size_kb": 0, 00:29:04.471 "state": "online", 00:29:04.471 "raid_level": "raid1", 00:29:04.471 "superblock": true, 00:29:04.471 "num_base_bdevs": 2, 00:29:04.471 "num_base_bdevs_discovered": 2, 00:29:04.471 "num_base_bdevs_operational": 2, 00:29:04.471 "base_bdevs_list": [ 00:29:04.471 { 00:29:04.471 "name": "pt1", 00:29:04.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:04.471 "is_configured": true, 00:29:04.471 "data_offset": 256, 00:29:04.471 "data_size": 7936 00:29:04.471 }, 00:29:04.471 { 00:29:04.471 "name": "pt2", 00:29:04.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:04.471 "is_configured": true, 00:29:04.471 "data_offset": 256, 00:29:04.471 "data_size": 7936 00:29:04.471 } 00:29:04.471 ] 00:29:04.471 } 00:29:04.471 } 00:29:04.471 }' 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:04.471 pt2' 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:04.471 18:43:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:04.730 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:04.730 "name": "pt1", 00:29:04.730 "aliases": [ 00:29:04.730 "00000000-0000-0000-0000-000000000001" 00:29:04.730 ], 00:29:04.730 "product_name": "passthru", 00:29:04.730 "block_size": 4128, 00:29:04.730 "num_blocks": 8192, 00:29:04.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:04.730 "md_size": 32, 00:29:04.730 "md_interleave": true, 00:29:04.730 "dif_type": 0, 00:29:04.730 "assigned_rate_limits": { 00:29:04.730 "rw_ios_per_sec": 0, 00:29:04.730 "rw_mbytes_per_sec": 0, 00:29:04.730 "r_mbytes_per_sec": 0, 00:29:04.730 "w_mbytes_per_sec": 0 00:29:04.730 }, 00:29:04.730 "claimed": true, 00:29:04.730 "claim_type": "exclusive_write", 00:29:04.730 "zoned": false, 00:29:04.730 "supported_io_types": { 00:29:04.730 "read": true, 00:29:04.730 "write": true, 00:29:04.730 "unmap": true, 00:29:04.730 "flush": true, 00:29:04.730 "reset": true, 00:29:04.730 "nvme_admin": false, 00:29:04.730 "nvme_io": false, 00:29:04.730 "nvme_io_md": false, 00:29:04.730 "write_zeroes": true, 00:29:04.730 "zcopy": true, 00:29:04.730 "get_zone_info": false, 00:29:04.730 "zone_management": false, 00:29:04.730 "zone_append": false, 00:29:04.730 "compare": false, 00:29:04.730 "compare_and_write": false, 00:29:04.730 "abort": true, 00:29:04.730 "seek_hole": false, 00:29:04.730 "seek_data": false, 00:29:04.730 "copy": true, 00:29:04.730 "nvme_iov_md": false 00:29:04.730 }, 00:29:04.730 "memory_domains": [ 00:29:04.730 { 00:29:04.730 "dma_device_id": "system", 00:29:04.730 "dma_device_type": 1 00:29:04.730 }, 00:29:04.730 { 00:29:04.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.730 "dma_device_type": 2 00:29:04.730 } 00:29:04.730 ], 00:29:04.730 
"driver_specific": { 00:29:04.730 "passthru": { 00:29:04.730 "name": "pt1", 00:29:04.730 "base_bdev_name": "malloc1" 00:29:04.730 } 00:29:04.730 } 00:29:04.730 }' 00:29:04.730 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:04.989 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:05.247 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:05.247 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:05.247 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:05.247 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:05.247 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:05.247 18:43:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:05.247 "name": "pt2", 00:29:05.247 "aliases": [ 00:29:05.247 "00000000-0000-0000-0000-000000000002" 00:29:05.247 ], 00:29:05.247 "product_name": "passthru", 00:29:05.247 "block_size": 4128, 00:29:05.247 "num_blocks": 8192, 00:29:05.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:05.247 "md_size": 32, 00:29:05.247 "md_interleave": true, 00:29:05.247 "dif_type": 0, 00:29:05.247 "assigned_rate_limits": { 00:29:05.247 "rw_ios_per_sec": 0, 00:29:05.247 "rw_mbytes_per_sec": 0, 00:29:05.247 "r_mbytes_per_sec": 0, 00:29:05.247 "w_mbytes_per_sec": 0 00:29:05.247 }, 00:29:05.247 "claimed": true, 00:29:05.247 "claim_type": "exclusive_write", 00:29:05.247 "zoned": false, 00:29:05.247 "supported_io_types": { 00:29:05.247 "read": true, 00:29:05.247 "write": true, 00:29:05.247 "unmap": true, 00:29:05.247 "flush": true, 00:29:05.247 "reset": true, 00:29:05.247 "nvme_admin": false, 00:29:05.247 "nvme_io": false, 00:29:05.247 "nvme_io_md": false, 00:29:05.247 "write_zeroes": true, 00:29:05.247 "zcopy": true, 00:29:05.247 "get_zone_info": false, 00:29:05.247 "zone_management": false, 00:29:05.247 "zone_append": false, 00:29:05.247 "compare": false, 00:29:05.247 "compare_and_write": false, 00:29:05.247 "abort": true, 00:29:05.247 "seek_hole": false, 00:29:05.247 "seek_data": false, 00:29:05.247 "copy": true, 00:29:05.247 "nvme_iov_md": false 00:29:05.247 }, 00:29:05.247 "memory_domains": [ 00:29:05.247 { 00:29:05.247 "dma_device_id": "system", 00:29:05.247 "dma_device_type": 1 00:29:05.247 }, 00:29:05.247 { 00:29:05.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:05.247 "dma_device_type": 2 00:29:05.247 } 00:29:05.247 ], 00:29:05.247 "driver_specific": { 00:29:05.247 "passthru": { 00:29:05.247 "name": "pt2", 00:29:05.247 "base_bdev_name": "malloc2" 00:29:05.247 } 00:29:05.247 } 00:29:05.247 }' 00:29:05.247 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:05.505 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:05.505 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:05.505 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:05.505 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:05.505 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:05.505 18:43:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:05.505 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:05.505 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:05.505 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:05.763 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:05.763 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:05.764 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:05.764 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:29:06.022 [2024-07-15 18:43:51.371168] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:06.022 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' a7b671e5-3ecf-4ef6-b76e-416eafc0ba52 '!=' a7b671e5-3ecf-4ef6-b76e-416eafc0ba52 ']' 00:29:06.022 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:29:06.022 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:06.022 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:29:06.022 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:06.281 [2024-07-15 18:43:51.631619] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.281 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.540 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.540 "name": "raid_bdev1", 00:29:06.540 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:06.540 "strip_size_kb": 0, 00:29:06.540 "state": "online", 00:29:06.540 "raid_level": "raid1", 00:29:06.540 "superblock": true, 00:29:06.540 "num_base_bdevs": 2, 00:29:06.540 "num_base_bdevs_discovered": 1, 00:29:06.540 "num_base_bdevs_operational": 1, 00:29:06.540 "base_bdevs_list": [ 00:29:06.540 { 00:29:06.540 "name": null, 00:29:06.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.540 "is_configured": false, 00:29:06.540 "data_offset": 256, 00:29:06.540 "data_size": 7936 00:29:06.540 }, 00:29:06.540 { 00:29:06.540 "name": "pt2", 00:29:06.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:06.540 "is_configured": true, 00:29:06.540 "data_offset": 256, 00:29:06.540 "data_size": 7936 00:29:06.540 } 00:29:06.540 ] 00:29:06.540 }' 00:29:06.540 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.540 18:43:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:07.107 18:43:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:07.366 [2024-07-15 18:43:52.746604] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:07.366 [2024-07-15 18:43:52.746628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:07.366 [2024-07-15 18:43:52.746677] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:07.366 [2024-07-15 
18:43:52.746719] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:07.366 [2024-07-15 18:43:52.746729] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a302a0 name raid_bdev1, state offline 00:29:07.366 18:43:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.366 18:43:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:29:07.624 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:29:07.624 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:29:07.624 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:29:07.624 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:07.624 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:07.883 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:07.883 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:07.883 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:29:07.883 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:07.883 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:29:07.883 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:08.141 [2024-07-15 18:43:53.528674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:08.141 [2024-07-15 18:43:53.528715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.141 [2024-07-15 18:43:53.528729] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a31f40 00:29:08.141 [2024-07-15 18:43:53.528738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.141 [2024-07-15 18:43:53.530231] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.141 [2024-07-15 18:43:53.530258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:08.141 [2024-07-15 18:43:53.530299] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:08.141 [2024-07-15 18:43:53.530323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:08.141 [2024-07-15 18:43:53.530388] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a309c0 00:29:08.141 [2024-07-15 18:43:53.530396] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:08.141 [2024-07-15 18:43:53.530456] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a2dab0 00:29:08.141 [2024-07-15 18:43:53.530531] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a309c0 00:29:08.141 [2024-07-15 18:43:53.530539] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a309c0 00:29:08.141 [2024-07-15 18:43:53.530593] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:08.141 pt2 00:29:08.141 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:08.141 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:08.141 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.142 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.400 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:08.400 "name": "raid_bdev1", 00:29:08.400 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:08.400 "strip_size_kb": 0, 00:29:08.400 "state": "online", 00:29:08.400 "raid_level": "raid1", 00:29:08.400 "superblock": true, 00:29:08.400 "num_base_bdevs": 2, 00:29:08.400 "num_base_bdevs_discovered": 1, 00:29:08.400 "num_base_bdevs_operational": 1, 00:29:08.400 
"base_bdevs_list": [ 00:29:08.400 { 00:29:08.400 "name": null, 00:29:08.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.400 "is_configured": false, 00:29:08.400 "data_offset": 256, 00:29:08.400 "data_size": 7936 00:29:08.400 }, 00:29:08.400 { 00:29:08.400 "name": "pt2", 00:29:08.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:08.400 "is_configured": true, 00:29:08.400 "data_offset": 256, 00:29:08.400 "data_size": 7936 00:29:08.400 } 00:29:08.400 ] 00:29:08.400 }' 00:29:08.400 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:08.400 18:43:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:08.968 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:09.226 [2024-07-15 18:43:54.671789] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:09.226 [2024-07-15 18:43:54.671812] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:09.226 [2024-07-15 18:43:54.671862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:09.226 [2024-07-15 18:43:54.671903] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:09.226 [2024-07-15 18:43:54.671912] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a309c0 name raid_bdev1, state offline 00:29:09.226 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.227 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:29:09.485 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@526 -- # raid_bdev= 00:29:09.485 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:29:09.485 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:29:09.485 18:43:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:09.745 [2024-07-15 18:43:55.197176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:09.745 [2024-07-15 18:43:55.197217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:09.745 [2024-07-15 18:43:55.197231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a2d930 00:29:09.745 [2024-07-15 18:43:55.197240] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:09.745 [2024-07-15 18:43:55.198714] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:09.745 [2024-07-15 18:43:55.198739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:09.745 [2024-07-15 18:43:55.198778] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:09.745 [2024-07-15 18:43:55.198801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:09.745 [2024-07-15 18:43:55.198880] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:09.745 [2024-07-15 18:43:55.198890] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:09.745 [2024-07-15 18:43:55.198902] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a315d0 name raid_bdev1, state configuring 00:29:09.745 [2024-07-15 18:43:55.198923] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:09.745 [2024-07-15 18:43:55.199008] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x189f9a0 00:29:09.745 [2024-07-15 18:43:55.199019] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:09.745 [2024-07-15 18:43:55.199082] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a31750 00:29:09.745 [2024-07-15 18:43:55.199157] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x189f9a0 00:29:09.745 [2024-07-15 18:43:55.199164] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x189f9a0 00:29:09.745 [2024-07-15 18:43:55.199225] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:09.745 pt1 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:09.745 18:43:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.745 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.004 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:10.004 "name": "raid_bdev1", 00:29:10.004 "uuid": "a7b671e5-3ecf-4ef6-b76e-416eafc0ba52", 00:29:10.004 "strip_size_kb": 0, 00:29:10.004 "state": "online", 00:29:10.004 "raid_level": "raid1", 00:29:10.004 "superblock": true, 00:29:10.004 "num_base_bdevs": 2, 00:29:10.004 "num_base_bdevs_discovered": 1, 00:29:10.004 "num_base_bdevs_operational": 1, 00:29:10.004 "base_bdevs_list": [ 00:29:10.004 { 00:29:10.005 "name": null, 00:29:10.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.005 "is_configured": false, 00:29:10.005 "data_offset": 256, 00:29:10.005 "data_size": 7936 00:29:10.005 }, 00:29:10.005 { 00:29:10.005 "name": "pt2", 00:29:10.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:10.005 "is_configured": true, 00:29:10.005 "data_offset": 256, 00:29:10.005 "data_size": 7936 00:29:10.005 } 00:29:10.005 ] 00:29:10.005 }' 00:29:10.005 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:10.005 18:43:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:10.941 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:10.941 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:10.942 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:29:10.942 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:10.942 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:29:11.201 [2024-07-15 18:43:56.617250] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' a7b671e5-3ecf-4ef6-b76e-416eafc0ba52 '!=' a7b671e5-3ecf-4ef6-b76e-416eafc0ba52 ']' 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2951767 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2951767 ']' 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2951767 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2951767 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2951767' 00:29:11.201 killing process with pid 2951767 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 2951767 00:29:11.201 [2024-07-15 18:43:56.681979] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:11.201 [2024-07-15 18:43:56.682033] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:11.201 [2024-07-15 18:43:56.682073] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:11.201 [2024-07-15 18:43:56.682082] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x189f9a0 name raid_bdev1, state offline 00:29:11.201 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 2951767 00:29:11.201 [2024-07-15 18:43:56.699143] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:11.460 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:29:11.460 00:29:11.460 real 0m16.281s 00:29:11.460 user 0m30.240s 00:29:11.460 sys 0m2.362s 00:29:11.460 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:11.460 18:43:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:11.460 ************************************ 00:29:11.460 END TEST raid_superblock_test_md_interleaved 00:29:11.460 ************************************ 00:29:11.460 18:43:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:11.460 18:43:56 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:29:11.460 18:43:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:11.460 18:43:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.460 
18:43:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:11.460 ************************************ 00:29:11.460 START TEST raid_rebuild_test_sb_md_interleaved 00:29:11.460 ************************************ 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:11.460 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2954543 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2954543 /var/tmp/spdk-raid.sock 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2954543 ']' 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:11.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:11.461 18:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:11.461 [2024-07-15 18:43:57.005422] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:29:11.461 [2024-07-15 18:43:57.005484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954543 ] 00:29:11.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:11.461 Zero copy mechanism will not be used. 
00:29:11.720 [2024-07-15 18:43:57.104154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.720 [2024-07-15 18:43:57.201599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.720 [2024-07-15 18:43:57.257659] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.720 [2024-07-15 18:43:57.257688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.999 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:11.999 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:29:11.999 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:11.999 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:29:12.267 BaseBdev1_malloc 00:29:12.267 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:12.595 [2024-07-15 18:43:57.953730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:12.595 [2024-07-15 18:43:57.953773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.595 [2024-07-15 18:43:57.953793] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f9a90 00:29:12.595 [2024-07-15 18:43:57.953803] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.595 [2024-07-15 18:43:57.955348] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.595 [2024-07-15 18:43:57.955374] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:12.595 BaseBdev1 00:29:12.595 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:12.595 18:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:29:12.872 BaseBdev2_malloc 00:29:12.872 18:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:13.130 [2024-07-15 18:43:58.459826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:13.130 [2024-07-15 18:43:58.459871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.130 [2024-07-15 18:43:58.459890] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a87370 00:29:13.130 [2024-07-15 18:43:58.459899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.130 [2024-07-15 18:43:58.461398] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.130 [2024-07-15 18:43:58.461425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:13.130 BaseBdev2 00:29:13.130 18:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:29:13.388 spare_malloc 00:29:13.388 18:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay 
-r 0 -t 0 -w 100000 -n 100000 00:29:13.388 spare_delay 00:29:13.389 18:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:13.647 [2024-07-15 18:43:59.162451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:13.647 [2024-07-15 18:43:59.162493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.647 [2024-07-15 18:43:59.162509] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a87a10 00:29:13.647 [2024-07-15 18:43:59.162519] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.647 [2024-07-15 18:43:59.164000] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.647 [2024-07-15 18:43:59.164026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:13.647 spare 00:29:13.647 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:13.905 [2024-07-15 18:43:59.415459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:13.905 [2024-07-15 18:43:59.416834] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:13.905 [2024-07-15 18:43:59.417012] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a7bd80 00:29:13.905 [2024-07-15 18:43:59.417025] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:13.905 [2024-07-15 18:43:59.417096] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a85ab0 00:29:13.905 [2024-07-15 18:43:59.417183] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x1a7bd80 00:29:13.905 [2024-07-15 18:43:59.417191] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a7bd80 00:29:13.905 [2024-07-15 18:43:59.417248] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.905 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.163 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:29:14.163 "name": "raid_bdev1", 00:29:14.163 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:14.163 "strip_size_kb": 0, 00:29:14.163 "state": "online", 00:29:14.163 "raid_level": "raid1", 00:29:14.163 "superblock": true, 00:29:14.163 "num_base_bdevs": 2, 00:29:14.163 "num_base_bdevs_discovered": 2, 00:29:14.163 "num_base_bdevs_operational": 2, 00:29:14.163 "base_bdevs_list": [ 00:29:14.163 { 00:29:14.163 "name": "BaseBdev1", 00:29:14.163 "uuid": "18913bf6-68cc-54e5-8ba3-d707fd37c13b", 00:29:14.163 "is_configured": true, 00:29:14.163 "data_offset": 256, 00:29:14.163 "data_size": 7936 00:29:14.163 }, 00:29:14.163 { 00:29:14.163 "name": "BaseBdev2", 00:29:14.163 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:14.163 "is_configured": true, 00:29:14.163 "data_offset": 256, 00:29:14.163 "data_size": 7936 00:29:14.163 } 00:29:14.163 ] 00:29:14.163 }' 00:29:14.163 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:14.163 18:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:15.097 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:15.097 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:15.097 [2024-07-15 18:44:00.634999] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:15.355 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:29:15.355 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.355 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- 
# jq -r '.[].base_bdevs_list[0].data_offset' 00:29:15.614 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:29:15.614 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:15.614 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:29:15.614 18:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:15.614 [2024-07-15 18:44:01.156117] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:15.873 
18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.873 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.132 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.132 "name": "raid_bdev1", 00:29:16.132 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:16.132 "strip_size_kb": 0, 00:29:16.132 "state": "online", 00:29:16.132 "raid_level": "raid1", 00:29:16.132 "superblock": true, 00:29:16.132 "num_base_bdevs": 2, 00:29:16.132 "num_base_bdevs_discovered": 1, 00:29:16.132 "num_base_bdevs_operational": 1, 00:29:16.132 "base_bdevs_list": [ 00:29:16.132 { 00:29:16.132 "name": null, 00:29:16.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.132 "is_configured": false, 00:29:16.132 "data_offset": 256, 00:29:16.132 "data_size": 7936 00:29:16.132 }, 00:29:16.132 { 00:29:16.132 "name": "BaseBdev2", 00:29:16.132 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:16.132 "is_configured": true, 00:29:16.132 "data_offset": 256, 00:29:16.132 "data_size": 7936 00:29:16.132 } 00:29:16.132 ] 00:29:16.132 }' 00:29:16.132 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.132 18:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:16.699 18:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:16.957 [2024-07-15 18:44:02.307236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:16.957 [2024-07-15 18:44:02.310760] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x1a7bcd0 00:29:16.957 [2024-07-15 18:44:02.312814] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:16.957 18:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.895 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.155 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:18.155 "name": "raid_bdev1", 00:29:18.155 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:18.155 "strip_size_kb": 0, 00:29:18.155 "state": "online", 00:29:18.155 "raid_level": "raid1", 00:29:18.155 "superblock": true, 00:29:18.155 "num_base_bdevs": 2, 00:29:18.155 "num_base_bdevs_discovered": 2, 00:29:18.155 "num_base_bdevs_operational": 2, 00:29:18.155 "process": { 00:29:18.155 "type": "rebuild", 00:29:18.155 "target": "spare", 00:29:18.155 "progress": { 00:29:18.155 "blocks": 3072, 00:29:18.155 "percent": 38 00:29:18.155 } 00:29:18.155 }, 00:29:18.155 "base_bdevs_list": [ 00:29:18.155 { 
00:29:18.155 "name": "spare", 00:29:18.155 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:18.155 "is_configured": true, 00:29:18.155 "data_offset": 256, 00:29:18.155 "data_size": 7936 00:29:18.155 }, 00:29:18.155 { 00:29:18.155 "name": "BaseBdev2", 00:29:18.155 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:18.155 "is_configured": true, 00:29:18.155 "data_offset": 256, 00:29:18.155 "data_size": 7936 00:29:18.155 } 00:29:18.155 ] 00:29:18.155 }' 00:29:18.155 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:18.155 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:18.155 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:18.156 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:18.156 18:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:18.414 [2024-07-15 18:44:03.933901] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:18.673 [2024-07-15 18:44:04.025661] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:18.673 [2024-07-15 18:44:04.025709] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.673 [2024-07-15 18:44:04.025723] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:18.673 [2024-07-15 18:44:04.025729] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:18.673 18:44:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.673 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.931 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.932 "name": "raid_bdev1", 00:29:18.932 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:18.932 "strip_size_kb": 0, 00:29:18.932 "state": "online", 00:29:18.932 "raid_level": "raid1", 00:29:18.932 "superblock": true, 00:29:18.932 "num_base_bdevs": 2, 00:29:18.932 "num_base_bdevs_discovered": 1, 00:29:18.932 "num_base_bdevs_operational": 1, 00:29:18.932 "base_bdevs_list": [ 00:29:18.932 { 00:29:18.932 "name": null, 00:29:18.932 
"uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.932 "is_configured": false, 00:29:18.932 "data_offset": 256, 00:29:18.932 "data_size": 7936 00:29:18.932 }, 00:29:18.932 { 00:29:18.932 "name": "BaseBdev2", 00:29:18.932 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:18.932 "is_configured": true, 00:29:18.932 "data_offset": 256, 00:29:18.932 "data_size": 7936 00:29:18.932 } 00:29:18.932 ] 00:29:18.932 }' 00:29:18.932 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.932 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.500 18:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.759 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.759 "name": "raid_bdev1", 00:29:19.759 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:19.759 "strip_size_kb": 0, 00:29:19.759 "state": "online", 00:29:19.759 "raid_level": "raid1", 00:29:19.759 "superblock": true, 00:29:19.759 
"num_base_bdevs": 2, 00:29:19.759 "num_base_bdevs_discovered": 1, 00:29:19.759 "num_base_bdevs_operational": 1, 00:29:19.759 "base_bdevs_list": [ 00:29:19.759 { 00:29:19.759 "name": null, 00:29:19.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.759 "is_configured": false, 00:29:19.759 "data_offset": 256, 00:29:19.759 "data_size": 7936 00:29:19.759 }, 00:29:19.759 { 00:29:19.759 "name": "BaseBdev2", 00:29:19.759 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:19.759 "is_configured": true, 00:29:19.759 "data_offset": 256, 00:29:19.759 "data_size": 7936 00:29:19.759 } 00:29:19.759 ] 00:29:19.759 }' 00:29:19.759 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:19.759 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:19.759 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:19.759 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:19.759 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:20.018 [2024-07-15 18:44:05.513426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:20.018 [2024-07-15 18:44:05.516890] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a7b7e0 00:29:20.018 [2024-07-15 18:44:05.518390] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:20.018 18:44:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:21.396 "name": "raid_bdev1", 00:29:21.396 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:21.396 "strip_size_kb": 0, 00:29:21.396 "state": "online", 00:29:21.396 "raid_level": "raid1", 00:29:21.396 "superblock": true, 00:29:21.396 "num_base_bdevs": 2, 00:29:21.396 "num_base_bdevs_discovered": 2, 00:29:21.396 "num_base_bdevs_operational": 2, 00:29:21.396 "process": { 00:29:21.396 "type": "rebuild", 00:29:21.396 "target": "spare", 00:29:21.396 "progress": { 00:29:21.396 "blocks": 3072, 00:29:21.396 "percent": 38 00:29:21.396 } 00:29:21.396 }, 00:29:21.396 "base_bdevs_list": [ 00:29:21.396 { 00:29:21.396 "name": "spare", 00:29:21.396 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:21.396 "is_configured": true, 00:29:21.396 "data_offset": 256, 00:29:21.396 "data_size": 7936 00:29:21.396 }, 00:29:21.396 { 00:29:21.396 "name": "BaseBdev2", 00:29:21.396 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:21.396 "is_configured": true, 00:29:21.396 "data_offset": 256, 00:29:21.396 "data_size": 7936 00:29:21.396 
} 00:29:21.396 ] 00:29:21.396 }' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:21.396 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1205 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.396 18:44:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.655 18:44:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:21.655 "name": "raid_bdev1", 00:29:21.655 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:21.655 "strip_size_kb": 0, 00:29:21.655 "state": "online", 00:29:21.655 "raid_level": "raid1", 00:29:21.655 "superblock": true, 00:29:21.655 "num_base_bdevs": 2, 00:29:21.655 "num_base_bdevs_discovered": 2, 00:29:21.655 "num_base_bdevs_operational": 2, 00:29:21.655 "process": { 00:29:21.655 "type": "rebuild", 00:29:21.655 "target": "spare", 00:29:21.655 "progress": { 00:29:21.655 "blocks": 3840, 00:29:21.655 "percent": 48 00:29:21.655 } 00:29:21.655 }, 00:29:21.655 "base_bdevs_list": [ 00:29:21.655 { 00:29:21.655 "name": "spare", 00:29:21.655 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:21.655 "is_configured": true, 00:29:21.655 "data_offset": 256, 00:29:21.655 "data_size": 7936 00:29:21.655 }, 00:29:21.655 { 00:29:21.655 "name": "BaseBdev2", 00:29:21.655 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:21.655 "is_configured": true, 00:29:21.655 "data_offset": 256, 00:29:21.656 "data_size": 7936 00:29:21.656 } 00:29:21.656 ] 00:29:21.656 }' 00:29:21.656 18:44:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:21.656 18:44:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:21.656 18:44:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:21.914 18:44:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:21.914 18:44:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.851 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.110 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:23.110 "name": "raid_bdev1", 00:29:23.110 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:23.110 "strip_size_kb": 0, 00:29:23.110 "state": "online", 00:29:23.110 "raid_level": "raid1", 00:29:23.110 "superblock": true, 00:29:23.110 "num_base_bdevs": 2, 00:29:23.110 "num_base_bdevs_discovered": 2, 00:29:23.110 "num_base_bdevs_operational": 2, 00:29:23.110 "process": { 00:29:23.110 "type": "rebuild", 00:29:23.110 
"target": "spare", 00:29:23.110 "progress": { 00:29:23.110 "blocks": 7424, 00:29:23.110 "percent": 93 00:29:23.110 } 00:29:23.110 }, 00:29:23.110 "base_bdevs_list": [ 00:29:23.110 { 00:29:23.110 "name": "spare", 00:29:23.110 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:23.110 "is_configured": true, 00:29:23.110 "data_offset": 256, 00:29:23.110 "data_size": 7936 00:29:23.110 }, 00:29:23.110 { 00:29:23.110 "name": "BaseBdev2", 00:29:23.110 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:23.110 "is_configured": true, 00:29:23.110 "data_offset": 256, 00:29:23.110 "data_size": 7936 00:29:23.110 } 00:29:23.110 ] 00:29:23.110 }' 00:29:23.110 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:23.110 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:23.110 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:23.110 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:23.110 18:44:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:23.110 [2024-07-15 18:44:08.641645] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:23.110 [2024-07-15 18:44:08.641702] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:23.110 [2024-07-15 18:44:08.641781] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.487 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.487 "name": "raid_bdev1", 00:29:24.487 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:24.487 "strip_size_kb": 0, 00:29:24.487 "state": "online", 00:29:24.487 "raid_level": "raid1", 00:29:24.487 "superblock": true, 00:29:24.487 "num_base_bdevs": 2, 00:29:24.487 "num_base_bdevs_discovered": 2, 00:29:24.487 "num_base_bdevs_operational": 2, 00:29:24.487 "base_bdevs_list": [ 00:29:24.487 { 00:29:24.487 "name": "spare", 00:29:24.488 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:24.488 "is_configured": true, 00:29:24.488 "data_offset": 256, 00:29:24.488 "data_size": 7936 00:29:24.488 }, 00:29:24.488 { 00:29:24.488 "name": "BaseBdev2", 00:29:24.488 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:24.488 "is_configured": true, 00:29:24.488 "data_offset": 256, 00:29:24.488 "data_size": 7936 00:29:24.488 } 00:29:24.488 ] 00:29:24.488 }' 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.488 18:44:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.746 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.746 "name": "raid_bdev1", 00:29:24.746 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:24.746 "strip_size_kb": 0, 00:29:24.746 "state": "online", 00:29:24.746 "raid_level": "raid1", 00:29:24.746 "superblock": true, 00:29:24.746 "num_base_bdevs": 2, 00:29:24.746 "num_base_bdevs_discovered": 2, 00:29:24.746 "num_base_bdevs_operational": 2, 00:29:24.746 "base_bdevs_list": [ 00:29:24.746 { 00:29:24.746 "name": "spare", 00:29:24.746 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:24.747 
"is_configured": true, 00:29:24.747 "data_offset": 256, 00:29:24.747 "data_size": 7936 00:29:24.747 }, 00:29:24.747 { 00:29:24.747 "name": "BaseBdev2", 00:29:24.747 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:24.747 "is_configured": true, 00:29:24.747 "data_offset": 256, 00:29:24.747 "data_size": 7936 00:29:24.747 } 00:29:24.747 ] 00:29:24.747 }' 00:29:24.747 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.747 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:24.747 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.005 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.264 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:25.264 "name": "raid_bdev1", 00:29:25.264 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:25.264 "strip_size_kb": 0, 00:29:25.264 "state": "online", 00:29:25.264 "raid_level": "raid1", 00:29:25.264 "superblock": true, 00:29:25.264 "num_base_bdevs": 2, 00:29:25.264 "num_base_bdevs_discovered": 2, 00:29:25.264 "num_base_bdevs_operational": 2, 00:29:25.264 "base_bdevs_list": [ 00:29:25.264 { 00:29:25.264 "name": "spare", 00:29:25.264 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:25.264 "is_configured": true, 00:29:25.264 "data_offset": 256, 00:29:25.264 "data_size": 7936 00:29:25.264 }, 00:29:25.264 { 00:29:25.264 "name": "BaseBdev2", 00:29:25.264 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:25.265 "is_configured": true, 00:29:25.265 "data_offset": 256, 00:29:25.265 "data_size": 7936 00:29:25.265 } 00:29:25.265 ] 00:29:25.265 }' 00:29:25.265 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:25.265 18:44:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:25.832 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:26.091 [2024-07-15 18:44:11.401091] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:29:26.091 [2024-07-15 18:44:11.401113] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:26.091 [2024-07-15 18:44:11.401168] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:26.091 [2024-07-15 18:44:11.401219] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:26.091 [2024-07-15 18:44:11.401229] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a7bd80 name raid_bdev1, state offline 00:29:26.091 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.091 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:29:26.350 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:26.350 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:29:26.350 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:26.350 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:26.610 18:44:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:26.869 [2024-07-15 18:44:12.179138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:26.869 [2024-07-15 18:44:12.179185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.869 [2024-07-15 18:44:12.179202] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a857b0 00:29:26.869 [2024-07-15 18:44:12.179212] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.869 [2024-07-15 18:44:12.181032] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.869 [2024-07-15 18:44:12.181060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:26.869 [2024-07-15 18:44:12.181112] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:26.869 [2024-07-15 18:44:12.181135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:26.869 [2024-07-15 18:44:12.181220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:26.869 spare 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:26.869 18:44:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.869 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.869 [2024-07-15 18:44:12.281535] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a7c8f0 00:29:26.869 [2024-07-15 18:44:12.281555] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:26.869 [2024-07-15 18:44:12.281635] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f0310 00:29:26.869 [2024-07-15 18:44:12.281733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a7c8f0 00:29:26.869 [2024-07-15 18:44:12.281741] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a7c8f0 00:29:26.869 [2024-07-15 18:44:12.281807] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.128 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:27.128 "name": "raid_bdev1", 00:29:27.128 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:27.128 "strip_size_kb": 0, 00:29:27.128 "state": "online", 00:29:27.128 "raid_level": "raid1", 00:29:27.128 "superblock": true, 00:29:27.128 "num_base_bdevs": 2, 00:29:27.128 "num_base_bdevs_discovered": 2, 00:29:27.128 "num_base_bdevs_operational": 2, 00:29:27.128 "base_bdevs_list": [ 00:29:27.128 { 00:29:27.128 "name": "spare", 00:29:27.128 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:27.128 "is_configured": true, 00:29:27.128 "data_offset": 256, 00:29:27.128 "data_size": 7936 00:29:27.128 }, 00:29:27.128 { 00:29:27.128 "name": "BaseBdev2", 00:29:27.128 "uuid": 
"e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:27.128 "is_configured": true, 00:29:27.128 "data_offset": 256, 00:29:27.128 "data_size": 7936 00:29:27.128 } 00:29:27.128 ] 00:29:27.128 }' 00:29:27.128 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:27.128 18:44:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:27.694 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:27.695 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.695 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:27.695 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:27.695 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.695 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.695 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.976 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:27.976 "name": "raid_bdev1", 00:29:27.976 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:27.976 "strip_size_kb": 0, 00:29:27.976 "state": "online", 00:29:27.976 "raid_level": "raid1", 00:29:27.976 "superblock": true, 00:29:27.976 "num_base_bdevs": 2, 00:29:27.976 "num_base_bdevs_discovered": 2, 00:29:27.976 "num_base_bdevs_operational": 2, 00:29:27.976 "base_bdevs_list": [ 00:29:27.977 { 00:29:27.977 "name": "spare", 00:29:27.977 "uuid": 
"0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:27.977 "is_configured": true, 00:29:27.977 "data_offset": 256, 00:29:27.977 "data_size": 7936 00:29:27.977 }, 00:29:27.977 { 00:29:27.977 "name": "BaseBdev2", 00:29:27.977 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:27.977 "is_configured": true, 00:29:27.977 "data_offset": 256, 00:29:27.977 "data_size": 7936 00:29:27.977 } 00:29:27.977 ] 00:29:27.977 }' 00:29:27.977 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:27.977 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:27.977 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:27.977 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:27.977 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.977 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:28.233 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:28.233 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:28.491 [2024-07-15 18:44:13.960071] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.491 18:44:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.748 18:44:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.748 "name": "raid_bdev1", 00:29:28.748 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:28.748 "strip_size_kb": 0, 00:29:28.748 "state": "online", 00:29:28.748 "raid_level": "raid1", 00:29:28.748 "superblock": true, 00:29:28.748 "num_base_bdevs": 2, 00:29:28.749 "num_base_bdevs_discovered": 1, 00:29:28.749 "num_base_bdevs_operational": 1, 00:29:28.749 "base_bdevs_list": [ 00:29:28.749 { 00:29:28.749 "name": null, 00:29:28.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.749 "is_configured": false, 00:29:28.749 "data_offset": 
256, 00:29:28.749 "data_size": 7936 00:29:28.749 }, 00:29:28.749 { 00:29:28.749 "name": "BaseBdev2", 00:29:28.749 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:28.749 "is_configured": true, 00:29:28.749 "data_offset": 256, 00:29:28.749 "data_size": 7936 00:29:28.749 } 00:29:28.749 ] 00:29:28.749 }' 00:29:28.749 18:44:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.749 18:44:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:29.315 18:44:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:29.573 [2024-07-15 18:44:15.079112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:29.573 [2024-07-15 18:44:15.079250] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:29.573 [2024-07-15 18:44:15.079264] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:29.573 [2024-07-15 18:44:15.079287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:29.573 [2024-07-15 18:44:15.082631] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a87e90 00:29:29.573 [2024-07-15 18:44:15.084730] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:29.573 18:44:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.948 "name": "raid_bdev1", 00:29:30.948 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:30.948 "strip_size_kb": 0, 00:29:30.948 "state": "online", 00:29:30.948 "raid_level": "raid1", 00:29:30.948 "superblock": true, 00:29:30.948 "num_base_bdevs": 2, 00:29:30.948 "num_base_bdevs_discovered": 2, 00:29:30.948 "num_base_bdevs_operational": 2, 00:29:30.948 "process": { 00:29:30.948 "type": 
"rebuild", 00:29:30.948 "target": "spare", 00:29:30.948 "progress": { 00:29:30.948 "blocks": 2816, 00:29:30.948 "percent": 35 00:29:30.948 } 00:29:30.948 }, 00:29:30.948 "base_bdevs_list": [ 00:29:30.948 { 00:29:30.948 "name": "spare", 00:29:30.948 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:30.948 "is_configured": true, 00:29:30.948 "data_offset": 256, 00:29:30.948 "data_size": 7936 00:29:30.948 }, 00:29:30.948 { 00:29:30.948 "name": "BaseBdev2", 00:29:30.948 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:30.948 "is_configured": true, 00:29:30.948 "data_offset": 256, 00:29:30.948 "data_size": 7936 00:29:30.948 } 00:29:30.948 ] 00:29:30.948 }' 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:30.948 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:31.207 [2024-07-15 18:44:16.619328] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:31.207 [2024-07-15 18:44:16.696817] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:31.207 [2024-07-15 18:44:16.696861] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:31.207 [2024-07-15 18:44:16.696875] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:31.207 [2024-07-15 18:44:16.696882] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.207 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.465 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:31.465 "name": "raid_bdev1", 00:29:31.465 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:31.465 "strip_size_kb": 0, 00:29:31.465 "state": "online", 00:29:31.465 "raid_level": "raid1", 00:29:31.465 "superblock": true, 00:29:31.465 
"num_base_bdevs": 2, 00:29:31.465 "num_base_bdevs_discovered": 1, 00:29:31.465 "num_base_bdevs_operational": 1, 00:29:31.465 "base_bdevs_list": [ 00:29:31.465 { 00:29:31.465 "name": null, 00:29:31.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.465 "is_configured": false, 00:29:31.465 "data_offset": 256, 00:29:31.465 "data_size": 7936 00:29:31.465 }, 00:29:31.465 { 00:29:31.465 "name": "BaseBdev2", 00:29:31.465 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:31.465 "is_configured": true, 00:29:31.465 "data_offset": 256, 00:29:31.465 "data_size": 7936 00:29:31.465 } 00:29:31.465 ] 00:29:31.465 }' 00:29:31.465 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:31.465 18:44:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:32.400 18:44:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:32.400 [2024-07-15 18:44:17.827542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:32.400 [2024-07-15 18:44:17.827587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.400 [2024-07-15 18:44:17.827607] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a853f0 00:29:32.400 [2024-07-15 18:44:17.827616] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.400 [2024-07-15 18:44:17.827793] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.400 [2024-07-15 18:44:17.827815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:32.400 [2024-07-15 18:44:17.827869] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:32.400 [2024-07-15 18:44:17.827879] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:32.400 [2024-07-15 18:44:17.827888] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:32.400 [2024-07-15 18:44:17.827904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:32.400 [2024-07-15 18:44:17.831269] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f95d0 00:29:32.400 [2024-07-15 18:44:17.832767] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:32.400 spare 00:29:32.400 18:44:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.336 18:44:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.595 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.595 "name": "raid_bdev1", 00:29:33.595 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 
00:29:33.595 "strip_size_kb": 0, 00:29:33.595 "state": "online", 00:29:33.595 "raid_level": "raid1", 00:29:33.595 "superblock": true, 00:29:33.595 "num_base_bdevs": 2, 00:29:33.595 "num_base_bdevs_discovered": 2, 00:29:33.595 "num_base_bdevs_operational": 2, 00:29:33.595 "process": { 00:29:33.595 "type": "rebuild", 00:29:33.595 "target": "spare", 00:29:33.595 "progress": { 00:29:33.595 "blocks": 3072, 00:29:33.595 "percent": 38 00:29:33.595 } 00:29:33.595 }, 00:29:33.595 "base_bdevs_list": [ 00:29:33.595 { 00:29:33.595 "name": "spare", 00:29:33.595 "uuid": "0d7dac92-6aa4-5c04-8b06-e8e395233548", 00:29:33.595 "is_configured": true, 00:29:33.595 "data_offset": 256, 00:29:33.595 "data_size": 7936 00:29:33.595 }, 00:29:33.595 { 00:29:33.595 "name": "BaseBdev2", 00:29:33.595 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:33.595 "is_configured": true, 00:29:33.595 "data_offset": 256, 00:29:33.595 "data_size": 7936 00:29:33.595 } 00:29:33.595 ] 00:29:33.595 }' 00:29:33.595 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:33.854 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:33.854 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:33.854 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:33.854 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:34.115 [2024-07-15 18:44:19.438736] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:34.115 [2024-07-15 18:44:19.444918] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:34.115 [2024-07-15 
18:44:19.444965] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.115 [2024-07-15 18:44:19.444979] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:34.115 [2024-07-15 18:44:19.444985] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.115 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.374 18:44:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:34.374 "name": "raid_bdev1", 00:29:34.374 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:34.374 "strip_size_kb": 0, 00:29:34.374 "state": "online", 00:29:34.374 "raid_level": "raid1", 00:29:34.374 "superblock": true, 00:29:34.374 "num_base_bdevs": 2, 00:29:34.374 "num_base_bdevs_discovered": 1, 00:29:34.374 "num_base_bdevs_operational": 1, 00:29:34.374 "base_bdevs_list": [ 00:29:34.374 { 00:29:34.374 "name": null, 00:29:34.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.374 "is_configured": false, 00:29:34.374 "data_offset": 256, 00:29:34.374 "data_size": 7936 00:29:34.374 }, 00:29:34.374 { 00:29:34.374 "name": "BaseBdev2", 00:29:34.374 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:34.374 "is_configured": true, 00:29:34.374 "data_offset": 256, 00:29:34.374 "data_size": 7936 00:29:34.374 } 00:29:34.374 ] 00:29:34.374 }' 00:29:34.374 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:34.374 18:44:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:34.941 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:34.941 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:34.941 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:34.942 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:34.942 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:34.942 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.942 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.200 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:35.200 "name": "raid_bdev1", 00:29:35.200 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:35.200 "strip_size_kb": 0, 00:29:35.200 "state": "online", 00:29:35.200 "raid_level": "raid1", 00:29:35.200 "superblock": true, 00:29:35.200 "num_base_bdevs": 2, 00:29:35.200 "num_base_bdevs_discovered": 1, 00:29:35.200 "num_base_bdevs_operational": 1, 00:29:35.200 "base_bdevs_list": [ 00:29:35.200 { 00:29:35.200 "name": null, 00:29:35.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.200 "is_configured": false, 00:29:35.200 "data_offset": 256, 00:29:35.200 "data_size": 7936 00:29:35.200 }, 00:29:35.200 { 00:29:35.200 "name": "BaseBdev2", 00:29:35.200 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:35.200 "is_configured": true, 00:29:35.200 "data_offset": 256, 00:29:35.200 "data_size": 7936 00:29:35.200 } 00:29:35.200 ] 00:29:35.200 }' 00:29:35.200 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:35.200 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:35.200 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:35.200 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:35.200 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:35.458 18:44:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:35.717 [2024-07-15 18:44:21.213592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:35.717 [2024-07-15 18:44:21.213638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.717 [2024-07-15 18:44:21.213655] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f9cc0 00:29:35.717 [2024-07-15 18:44:21.213664] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.717 [2024-07-15 18:44:21.213817] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.717 [2024-07-15 18:44:21.213832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:35.717 [2024-07-15 18:44:21.213875] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:35.717 [2024-07-15 18:44:21.213885] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:35.717 [2024-07-15 18:44:21.213892] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:35.717 BaseBdev1 00:29:35.717 18:44:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.692 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.257 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.257 "name": "raid_bdev1", 00:29:37.257 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:37.257 "strip_size_kb": 0, 00:29:37.257 "state": "online", 00:29:37.257 "raid_level": "raid1", 00:29:37.257 "superblock": true, 00:29:37.257 "num_base_bdevs": 2, 00:29:37.257 "num_base_bdevs_discovered": 1, 00:29:37.257 "num_base_bdevs_operational": 1, 00:29:37.257 "base_bdevs_list": [ 00:29:37.257 { 00:29:37.257 "name": null, 00:29:37.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.257 "is_configured": false, 00:29:37.257 "data_offset": 256, 00:29:37.257 "data_size": 7936 00:29:37.257 }, 00:29:37.257 { 00:29:37.257 "name": "BaseBdev2", 00:29:37.257 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:37.257 "is_configured": true, 00:29:37.257 "data_offset": 256, 00:29:37.257 
"data_size": 7936 00:29:37.257 } 00:29:37.257 ] 00:29:37.257 }' 00:29:37.257 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.257 18:44:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.825 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.391 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:38.391 "name": "raid_bdev1", 00:29:38.391 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:38.391 "strip_size_kb": 0, 00:29:38.391 "state": "online", 00:29:38.391 "raid_level": "raid1", 00:29:38.391 "superblock": true, 00:29:38.391 "num_base_bdevs": 2, 00:29:38.391 "num_base_bdevs_discovered": 1, 00:29:38.391 "num_base_bdevs_operational": 1, 00:29:38.391 "base_bdevs_list": [ 00:29:38.391 { 00:29:38.391 "name": null, 00:29:38.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.391 "is_configured": false, 00:29:38.391 "data_offset": 256, 00:29:38.391 "data_size": 7936 00:29:38.391 }, 
00:29:38.391 { 00:29:38.391 "name": "BaseBdev2", 00:29:38.391 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:38.391 "is_configured": true, 00:29:38.391 "data_offset": 256, 00:29:38.391 "data_size": 7936 00:29:38.391 } 00:29:38.391 ] 00:29:38.391 }' 00:29:38.391 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:38.391 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:38.391 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:29:38.650 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:38.651 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.651 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:38.651 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:38.651 18:44:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:38.908 [2024-07-15 18:44:24.430306] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:38.908 [2024-07-15 18:44:24.430425] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:38.908 [2024-07-15 18:44:24.430439] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:38.908 request: 00:29:38.908 { 00:29:38.908 "base_bdev": "BaseBdev1", 00:29:38.908 "raid_bdev": "raid_bdev1", 00:29:38.908 "method": "bdev_raid_add_base_bdev", 00:29:38.908 "req_id": 1 00:29:38.908 } 00:29:38.908 Got JSON-RPC error response 00:29:38.908 response: 00:29:38.908 { 00:29:38.908 "code": -22, 00:29:38.908 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:38.908 } 00:29:39.168 18:44:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:29:39.168 18:44:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:29:39.168 18:44:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:39.168 18:44:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:39.168 18:44:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.103 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.671 18:44:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.671 "name": "raid_bdev1", 00:29:40.671 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:40.671 "strip_size_kb": 0, 00:29:40.671 "state": "online", 00:29:40.671 "raid_level": "raid1", 00:29:40.671 "superblock": true, 00:29:40.671 "num_base_bdevs": 2, 00:29:40.671 "num_base_bdevs_discovered": 1, 00:29:40.671 "num_base_bdevs_operational": 1, 00:29:40.671 "base_bdevs_list": [ 00:29:40.671 { 00:29:40.671 "name": null, 00:29:40.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.671 "is_configured": false, 00:29:40.671 "data_offset": 256, 00:29:40.671 "data_size": 7936 00:29:40.671 }, 00:29:40.671 { 00:29:40.671 "name": "BaseBdev2", 00:29:40.671 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:40.671 "is_configured": true, 00:29:40.671 "data_offset": 256, 00:29:40.671 "data_size": 7936 00:29:40.671 } 00:29:40.671 ] 00:29:40.671 }' 00:29:40.671 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.671 18:44:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.239 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.497 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.497 "name": "raid_bdev1", 00:29:41.497 "uuid": "37a29742-dbf1-4ff2-8de7-62122e219de1", 00:29:41.497 "strip_size_kb": 0, 00:29:41.497 "state": "online", 00:29:41.497 "raid_level": "raid1", 00:29:41.497 "superblock": true, 00:29:41.497 "num_base_bdevs": 2, 00:29:41.497 "num_base_bdevs_discovered": 1, 00:29:41.497 "num_base_bdevs_operational": 1, 00:29:41.497 "base_bdevs_list": [ 00:29:41.497 { 00:29:41.497 "name": null, 00:29:41.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.497 "is_configured": false, 00:29:41.497 "data_offset": 256, 00:29:41.497 "data_size": 7936 00:29:41.497 }, 00:29:41.497 { 00:29:41.497 "name": "BaseBdev2", 00:29:41.497 "uuid": "e205577f-57bf-50e3-9862-4a78f87f78ea", 00:29:41.497 "is_configured": true, 00:29:41.497 "data_offset": 256, 00:29:41.497 "data_size": 7936 00:29:41.497 } 00:29:41.497 ] 00:29:41.497 }' 00:29:41.497 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.497 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:41.497 18:44:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.497 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:41.497 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2954543 00:29:41.497 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2954543 ']' 00:29:41.497 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@952 -- # kill -0 2954543 00:29:41.497 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:29:41.498 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:41.498 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2954543 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2954543' 00:29:41.757 killing process with pid 2954543 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2954543 00:29:41.757 Received shutdown signal, test time was about 60.000000 seconds 00:29:41.757 00:29:41.757 Latency(us) 00:29:41.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.757 =================================================================================================================== 00:29:41.757 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:41.757 [2024-07-15 18:44:27.050494] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:41.757 [2024-07-15 18:44:27.050576] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:41.757 [2024-07-15 18:44:27.050619] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:41.757 [2024-07-15 18:44:27.050629] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a7c8f0 name raid_bdev1, state offline 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@972 -- # wait 2954543 00:29:41.757 [2024-07-15 18:44:27.078247] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:29:41.757 00:29:41.757 real 0m30.336s 00:29:41.757 user 0m50.239s 00:29:41.757 sys 0m3.128s 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:41.757 18:44:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:41.757 ************************************ 00:29:41.757 END TEST raid_rebuild_test_sb_md_interleaved 00:29:41.757 ************************************ 00:29:42.016 18:44:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:42.016 18:44:27 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:29:42.016 18:44:27 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:29:42.016 18:44:27 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2954543 ']' 00:29:42.016 18:44:27 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2954543 00:29:42.016 18:44:27 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:29:42.016 00:29:42.016 real 19m55.976s 00:29:42.016 user 35m0.429s 00:29:42.016 sys 2m44.530s 00:29:42.016 18:44:27 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:42.016 18:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:42.016 ************************************ 00:29:42.016 END TEST bdev_raid 00:29:42.016 ************************************ 00:29:42.016 18:44:27 -- common/autotest_common.sh@1142 -- # return 0 00:29:42.016 18:44:27 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:29:42.016 18:44:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:42.016 18:44:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.016 18:44:27 -- 
common/autotest_common.sh@10 -- # set +x 00:29:42.016 ************************************ 00:29:42.016 START TEST bdevperf_config 00:29:42.016 ************************************ 00:29:42.016 18:44:27 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:29:42.016 * Looking for test storage... 00:29:42.016 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:42.016 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@20 -- # 
cat 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:42.016 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:42.016 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:42.016 00:29:42.016 18:44:27 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:42.016 18:44:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:42.017 18:44:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:42.017 18:44:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:42.017 18:44:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:42.017 00:29:42.017 18:44:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:42.017 18:44:27 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:45.303 18:44:30 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 18:44:27.559323] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:45.303 [2024-07-15 18:44:27.559366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959676 ] 00:29:45.303 Using job config with 4 jobs 00:29:45.303 [2024-07-15 18:44:27.659263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.303 [2024-07-15 18:44:27.766881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.303 cpumask for '\''job0'\'' is too big 00:29:45.303 cpumask for '\''job1'\'' is too big 00:29:45.303 cpumask for '\''job2'\'' is too big 00:29:45.303 cpumask for '\''job3'\'' is too big 00:29:45.303 Running I/O for 2 seconds... 00:29:45.303 00:29:45.303 Latency(us) 00:29:45.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.303 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.303 Malloc0 : 2.02 21841.16 21.33 0.00 0.00 11706.65 1997.29 17725.93 00:29:45.303 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.02 21819.22 21.31 0.00 0.00 11688.97 1997.29 15728.64 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.03 21860.12 21.35 0.00 0.00 11639.10 1997.29 13731.35 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.03 21838.53 21.33 0.00 0.00 11621.81 1997.29 12046.14 00:29:45.304 =================================================================================================================== 00:29:45.304 Total : 87359.04 85.31 0.00 0.00 11664.04 1997.29 17725.93' 00:29:45.304 18:44:30 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 18:44:27.559323] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:45.304 [2024-07-15 18:44:27.559366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959676 ] 00:29:45.304 Using job config with 4 jobs 00:29:45.304 [2024-07-15 18:44:27.659263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.304 [2024-07-15 18:44:27.766881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.304 cpumask for '\''job0'\'' is too big 00:29:45.304 cpumask for '\''job1'\'' is too big 00:29:45.304 cpumask for '\''job2'\'' is too big 00:29:45.304 cpumask for '\''job3'\'' is too big 00:29:45.304 Running I/O for 2 seconds... 00:29:45.304 00:29:45.304 Latency(us) 00:29:45.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.02 21841.16 21.33 0.00 0.00 11706.65 1997.29 17725.93 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.02 21819.22 21.31 0.00 0.00 11688.97 1997.29 15728.64 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.03 21860.12 21.35 0.00 0.00 11639.10 1997.29 13731.35 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.03 21838.53 21.33 0.00 0.00 11621.81 1997.29 12046.14 00:29:45.304 =================================================================================================================== 00:29:45.304 Total : 87359.04 85.31 0.00 0.00 11664.04 1997.29 17725.93' 00:29:45.304 18:44:30 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 18:44:27.559323] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:45.304 [2024-07-15 18:44:27.559366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959676 ] 00:29:45.304 Using job config with 4 jobs 00:29:45.304 [2024-07-15 18:44:27.659263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.304 [2024-07-15 18:44:27.766881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.304 cpumask for '\''job0'\'' is too big 00:29:45.304 cpumask for '\''job1'\'' is too big 00:29:45.304 cpumask for '\''job2'\'' is too big 00:29:45.304 cpumask for '\''job3'\'' is too big 00:29:45.304 Running I/O for 2 seconds... 00:29:45.304 00:29:45.304 Latency(us) 00:29:45.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.02 21841.16 21.33 0.00 0.00 11706.65 1997.29 17725.93 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.02 21819.22 21.31 0.00 0.00 11688.97 1997.29 15728.64 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.03 21860.12 21.35 0.00 0.00 11639.10 1997.29 13731.35 00:29:45.304 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:45.304 Malloc0 : 2.03 21838.53 21.33 0.00 0.00 11621.81 1997.29 12046.14 00:29:45.304 =================================================================================================================== 00:29:45.304 Total : 87359.04 85.31 0.00 0.00 11664.04 1997.29 17725.93' 00:29:45.304 18:44:30 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:45.304 18:44:30 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:45.304 18:44:30 
bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:29:45.304 18:44:30 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:45.304 [2024-07-15 18:44:30.231444] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:29:45.304 [2024-07-15 18:44:30.231504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960071 ] 00:29:45.304 [2024-07-15 18:44:30.341262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.304 [2024-07-15 18:44:30.447761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.304 cpumask for 'job0' is too big 00:29:45.304 cpumask for 'job1' is too big 00:29:45.304 cpumask for 'job2' is too big 00:29:45.304 cpumask for 'job3' is too big 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:29:47.838 Running I/O for 2 seconds... 
00:29:47.838 00:29:47.838 Latency(us) 00:29:47.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:47.838 Malloc0 : 2.01 22003.78 21.49 0.00 0.00 11625.21 2028.50 17725.93 00:29:47.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:47.838 Malloc0 : 2.02 22013.13 21.50 0.00 0.00 11592.28 1989.49 15666.22 00:29:47.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:47.838 Malloc0 : 2.03 21991.25 21.48 0.00 0.00 11575.35 1981.68 13731.35 00:29:47.838 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:47.838 Malloc0 : 2.03 21969.31 21.45 0.00 0.00 11558.76 1981.68 12046.14 00:29:47.838 =================================================================================================================== 00:29:47.838 Total : 87977.46 85.92 0.00 0.00 11587.85 1981.68 17725.93' 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:47.838 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job 
job1 write Malloc0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:47.838 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:47.838 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:47.838 18:44:32 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 18:44:32.914724] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:50.372 [2024-07-15 18:44:32.914783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960425 ] 00:29:50.372 Using job config with 3 jobs 00:29:50.372 [2024-07-15 18:44:33.027107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.372 [2024-07-15 18:44:33.139791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.372 cpumask for '\''job0'\'' is too big 00:29:50.372 cpumask for '\''job1'\'' is too big 00:29:50.372 cpumask for '\''job2'\'' is too big 00:29:50.372 Running I/O for 2 seconds... 00:29:50.372 00:29:50.372 Latency(us) 00:29:50.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.01 29536.17 28.84 0.00 0.00 8652.67 1966.08 12607.88 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.02 29548.68 28.86 0.00 0.00 8628.53 1934.87 10673.01 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.02 29519.42 28.83 0.00 0.00 8616.67 1934.87 8987.79 00:29:50.372 =================================================================================================================== 00:29:50.372 Total : 88604.26 86.53 0.00 0.00 8632.60 1934.87 12607.88' 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 18:44:32.914724] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:50.372 [2024-07-15 18:44:32.914783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960425 ] 00:29:50.372 Using job config with 3 jobs 00:29:50.372 [2024-07-15 18:44:33.027107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.372 [2024-07-15 18:44:33.139791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.372 cpumask for '\''job0'\'' is too big 00:29:50.372 cpumask for '\''job1'\'' is too big 00:29:50.372 cpumask for '\''job2'\'' is too big 00:29:50.372 Running I/O for 2 seconds... 00:29:50.372 00:29:50.372 Latency(us) 00:29:50.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.01 29536.17 28.84 0.00 0.00 8652.67 1966.08 12607.88 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.02 29548.68 28.86 0.00 0.00 8628.53 1934.87 10673.01 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.02 29519.42 28.83 0.00 0.00 8616.67 1934.87 8987.79 00:29:50.372 =================================================================================================================== 00:29:50.372 Total : 88604.26 86.53 0.00 0.00 8632.60 1934.87 12607.88' 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 18:44:32.914724] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:50.372 [2024-07-15 18:44:32.914783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960425 ] 00:29:50.372 Using job config with 3 jobs 00:29:50.372 [2024-07-15 18:44:33.027107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.372 [2024-07-15 18:44:33.139791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.372 cpumask for '\''job0'\'' is too big 00:29:50.372 cpumask for '\''job1'\'' is too big 00:29:50.372 cpumask for '\''job2'\'' is too big 00:29:50.372 Running I/O for 2 seconds... 00:29:50.372 00:29:50.372 Latency(us) 00:29:50.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.01 29536.17 28.84 0.00 0.00 8652.67 1966.08 12607.88 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.02 29548.68 28.86 0.00 0.00 8628.53 1934.87 10673.01 00:29:50.372 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:50.372 Malloc0 : 2.02 29519.42 28.83 0.00 0.00 8616.67 1934.87 8987.79 00:29:50.372 =================================================================================================================== 00:29:50.372 Total : 88604.26 86.53 0.00 0.00 8632.60 1934.87 12607.88' 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:50.372 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:50.372 18:44:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:50.373 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:50.373 18:44:35 
bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:50.373 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:50.373 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:50.373 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:50.373 18:44:35 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:52.909 18:44:38 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 18:44:35.614195] 
Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:29:52.909 [2024-07-15 18:44:35.614254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960780 ] 00:29:52.909 Using job config with 4 jobs 00:29:52.909 [2024-07-15 18:44:35.726714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.909 [2024-07-15 18:44:35.835935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.909 cpumask for '\''job0'\'' is too big 00:29:52.909 cpumask for '\''job1'\'' is too big 00:29:52.909 cpumask for '\''job2'\'' is too big 00:29:52.909 cpumask for '\''job3'\'' is too big 00:29:52.909 Running I/O for 2 seconds... 00:29:52.909 00:29:52.909 Latency(us) 00:29:52.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.03 10862.48 10.61 0.00 0.00 23557.35 4088.20 35951.18 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc1 : 2.03 10851.36 10.60 0.00 0.00 23556.46 5024.43 35951.18 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.05 10871.72 10.62 0.00 0.00 23420.38 4056.99 31831.77 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc1 : 2.05 10860.77 10.61 0.00 0.00 23419.68 5024.43 31831.77 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.05 10850.16 10.60 0.00 0.00 23351.95 4056.99 27837.20 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 
Malloc1 : 2.05 10839.32 10.59 0.00 0.00 23350.19 5024.43 27837.20 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.06 10828.67 10.57 0.00 0.00 23283.92 4056.99 23967.45 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc1 : 2.06 10817.79 10.56 0.00 0.00 23285.17 5024.43 23967.45 00:29:52.909 =================================================================================================================== 00:29:52.909 Total : 86782.27 84.75 0.00 0.00 23402.69 4056.99 35951.18' 00:29:52.909 18:44:38 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 18:44:35.614195] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:29:52.909 [2024-07-15 18:44:35.614254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960780 ] 00:29:52.909 Using job config with 4 jobs 00:29:52.909 [2024-07-15 18:44:35.726714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.909 [2024-07-15 18:44:35.835935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.909 cpumask for '\''job0'\'' is too big 00:29:52.909 cpumask for '\''job1'\'' is too big 00:29:52.909 cpumask for '\''job2'\'' is too big 00:29:52.909 cpumask for '\''job3'\'' is too big 00:29:52.909 Running I/O for 2 seconds... 
00:29:52.909 00:29:52.909 Latency(us) 00:29:52.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.03 10862.48 10.61 0.00 0.00 23557.35 4088.20 35951.18 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc1 : 2.03 10851.36 10.60 0.00 0.00 23556.46 5024.43 35951.18 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.05 10871.72 10.62 0.00 0.00 23420.38 4056.99 31831.77 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc1 : 2.05 10860.77 10.61 0.00 0.00 23419.68 5024.43 31831.77 00:29:52.909 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.909 Malloc0 : 2.05 10850.16 10.60 0.00 0.00 23351.95 4056.99 27837.20 00:29:52.909 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc1 : 2.05 10839.32 10.59 0.00 0.00 23350.19 5024.43 27837.20 00:29:52.910 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc0 : 2.06 10828.67 10.57 0.00 0.00 23283.92 4056.99 23967.45 00:29:52.910 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc1 : 2.06 10817.79 10.56 0.00 0.00 23285.17 5024.43 23967.45 00:29:52.910 =================================================================================================================== 00:29:52.910 Total : 86782.27 84.75 0.00 0.00 23402.69 4056.99 35951.18' 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 18:44:35.614195] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:52.910 [2024-07-15 18:44:35.614254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960780 ] 00:29:52.910 Using job config with 4 jobs 00:29:52.910 [2024-07-15 18:44:35.726714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.910 [2024-07-15 18:44:35.835935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.910 cpumask for '\''job0'\'' is too big 00:29:52.910 cpumask for '\''job1'\'' is too big 00:29:52.910 cpumask for '\''job2'\'' is too big 00:29:52.910 cpumask for '\''job3'\'' is too big 00:29:52.910 Running I/O for 2 seconds... 00:29:52.910 00:29:52.910 Latency(us) 00:29:52.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.910 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc0 : 2.03 10862.48 10.61 0.00 0.00 23557.35 4088.20 35951.18 00:29:52.910 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc1 : 2.03 10851.36 10.60 0.00 0.00 23556.46 5024.43 35951.18 00:29:52.910 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc0 : 2.05 10871.72 10.62 0.00 0.00 23420.38 4056.99 31831.77 00:29:52.910 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc1 : 2.05 10860.77 10.61 0.00 0.00 23419.68 5024.43 31831.77 00:29:52.910 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc0 : 2.05 10850.16 10.60 0.00 0.00 23351.95 4056.99 27837.20 00:29:52.910 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc1 : 2.05 10839.32 10.59 0.00 0.00 23350.19 5024.43 27837.20 00:29:52.910 
Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc0 : 2.06 10828.67 10.57 0.00 0.00 23283.92 4056.99 23967.45 00:29:52.910 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:52.910 Malloc1 : 2.06 10817.79 10.56 0.00 0.00 23285.17 5024.43 23967.45 00:29:52.910 =================================================================================================================== 00:29:52.910 Total : 86782.27 84.75 0.00 0.00 23402.69 4056.99 35951.18' 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:52.910 18:44:38 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:52.910 00:29:52.910 real 0m10.893s 00:29:52.910 user 0m9.805s 00:29:52.910 sys 0m0.946s 00:29:52.910 18:44:38 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.910 18:44:38 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:29:52.910 ************************************ 00:29:52.910 END TEST bdevperf_config 00:29:52.910 ************************************ 00:29:52.910 18:44:38 -- common/autotest_common.sh@1142 -- # return 0 00:29:52.910 18:44:38 -- spdk/autotest.sh@192 -- # uname -s 00:29:52.910 18:44:38 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:29:52.910 18:44:38 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:52.910 18:44:38 -- common/autotest_common.sh@1099 
-- # '[' 2 -le 1 ']' 00:29:52.910 18:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.910 18:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:52.910 ************************************ 00:29:52.910 START TEST reactor_set_interrupt 00:29:52.910 ************************************ 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:52.910 * Looking for test storage... 00:29:52.910 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:52.910 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:29:52.910 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:29:52.910 18:44:38 reactor_set_interrupt -- 
common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:52.910 18:44:38 
reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:29:52.910 18:44:38 reactor_set_interrupt -- 
common/build_config.sh@48 -- # CONFIG_RDMA=y 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:29:52.910 18:44:38 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:29:52.911 18:44:38 
reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:29:52.911 18:44:38 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:29:52.911 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:52.911 18:44:38 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:52.911 18:44:38 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 
00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:53.173 #define SPDK_CONFIG_H 00:29:53.173 #define SPDK_CONFIG_APPS 1 00:29:53.173 #define SPDK_CONFIG_ARCH native 00:29:53.173 #undef SPDK_CONFIG_ASAN 00:29:53.173 #undef SPDK_CONFIG_AVAHI 00:29:53.173 #undef SPDK_CONFIG_CET 00:29:53.173 #define SPDK_CONFIG_COVERAGE 1 00:29:53.173 #define SPDK_CONFIG_CROSS_PREFIX 
00:29:53.173 #define SPDK_CONFIG_CRYPTO 1 00:29:53.173 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:29:53.173 #undef SPDK_CONFIG_CUSTOMOCF 00:29:53.173 #undef SPDK_CONFIG_DAOS 00:29:53.173 #define SPDK_CONFIG_DAOS_DIR 00:29:53.173 #define SPDK_CONFIG_DEBUG 1 00:29:53.173 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:29:53.173 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:53.173 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:53.173 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:53.173 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:53.173 #undef SPDK_CONFIG_DPDK_UADK 00:29:53.173 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:53.173 #define SPDK_CONFIG_EXAMPLES 1 00:29:53.173 #undef SPDK_CONFIG_FC 00:29:53.173 #define SPDK_CONFIG_FC_PATH 00:29:53.173 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:53.173 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:53.173 #undef SPDK_CONFIG_FUSE 00:29:53.173 #undef SPDK_CONFIG_FUZZER 00:29:53.173 #define SPDK_CONFIG_FUZZER_LIB 00:29:53.173 #undef SPDK_CONFIG_GOLANG 00:29:53.173 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:29:53.173 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:29:53.173 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:53.173 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:53.173 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:53.173 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:53.173 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:53.173 #define SPDK_CONFIG_IDXD 1 00:29:53.173 #define SPDK_CONFIG_IDXD_KERNEL 1 00:29:53.173 #define SPDK_CONFIG_IPSEC_MB 1 00:29:53.173 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:29:53.173 #define SPDK_CONFIG_ISAL 1 00:29:53.173 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:53.173 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:53.173 #define SPDK_CONFIG_LIBDIR 00:29:53.173 #undef SPDK_CONFIG_LTO 00:29:53.173 #define SPDK_CONFIG_MAX_LCORES 128 00:29:53.173 #define SPDK_CONFIG_NVME_CUSE 1 00:29:53.173 #undef 
SPDK_CONFIG_OCF 00:29:53.173 #define SPDK_CONFIG_OCF_PATH 00:29:53.173 #define SPDK_CONFIG_OPENSSL_PATH 00:29:53.173 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:53.173 #define SPDK_CONFIG_PGO_DIR 00:29:53.173 #undef SPDK_CONFIG_PGO_USE 00:29:53.173 #define SPDK_CONFIG_PREFIX /usr/local 00:29:53.173 #undef SPDK_CONFIG_RAID5F 00:29:53.173 #undef SPDK_CONFIG_RBD 00:29:53.173 #define SPDK_CONFIG_RDMA 1 00:29:53.173 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:53.173 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:53.173 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:53.173 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:53.173 #define SPDK_CONFIG_SHARED 1 00:29:53.173 #undef SPDK_CONFIG_SMA 00:29:53.173 #define SPDK_CONFIG_TESTS 1 00:29:53.173 #undef SPDK_CONFIG_TSAN 00:29:53.173 #define SPDK_CONFIG_UBLK 1 00:29:53.173 #define SPDK_CONFIG_UBSAN 1 00:29:53.173 #undef SPDK_CONFIG_UNIT_TESTS 00:29:53.173 #undef SPDK_CONFIG_URING 00:29:53.173 #define SPDK_CONFIG_URING_PATH 00:29:53.173 #undef SPDK_CONFIG_URING_ZNS 00:29:53.173 #undef SPDK_CONFIG_USDT 00:29:53.173 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:29:53.173 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:29:53.173 #undef SPDK_CONFIG_VFIO_USER 00:29:53.173 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:53.173 #define SPDK_CONFIG_VHOST 1 00:29:53.173 #define SPDK_CONFIG_VIRTIO 1 00:29:53.173 #undef SPDK_CONFIG_VTUNE 00:29:53.173 #define SPDK_CONFIG_VTUNE_DIR 00:29:53.173 #define SPDK_CONFIG_WERROR 1 00:29:53.173 #define SPDK_CONFIG_WPDK_DIR 00:29:53.173 #undef SPDK_CONFIG_XNVME 00:29:53.173 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:53.173 18:44:38 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:53.173 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:53.173 18:44:38 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:29:53.173 18:44:38 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.173 18:44:38 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.173 18:44:38 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.173 18:44:38 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.173 18:44:38 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.173 18:44:38 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:29:53.173 18:44:38 reactor_set_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.173 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:53.173 18:44:38 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:53.173 18:44:38 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:53.173 18:44:38 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:53.173 18:44:38 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:29:53.173 18:44:38 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:53.173 18:44:38 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:29:53.174 18:44:38 
reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:29:53.174 18:44:38 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 
00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 
0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- 
common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:29:53.174 18:44:38 
reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:29:53.174 
18:44:38 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:53.174 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:53.175 18:44:38 reactor_set_interrupt -- 
common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@238 -- # 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:53.175 18:44:38 reactor_set_interrupt -- 
common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j88 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@299 -- # 
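The env-setup trace above (autotest_common.sh@193–@238) configures ASan/UBSan options and builds an LSAN suppression file whitelisting a known libfuse3 leak before the target starts. A minimal sketch of that pattern, using a mktemp path in place of /var/tmp/asan_suppression_file:

```shell
# Sketch of the sanitizer setup traced above; the temp path stands in
# for /var/tmp/asan_suppression_file.
supp_file=$(mktemp)
rm -rf "$supp_file"                       # start from a clean file, as @199 does
echo "leak:libfuse3.so" >> "$supp_file"   # known libfuse3 leak (@236)
export LSAN_OPTIONS="suppressions=$supp_file"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
echo "$LSAN_OPTIONS"
```

Any process launched after this inherits the sanitizer behaviour through the exported variables.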
TEST_MODE= 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 2961240 ]] 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 2961240 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.LtKJlV 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.LtKJlV/tests/interrupt /tmp/spdk.LtKJlV 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs 
size use avail _ mount 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=954421248 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4330008576 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=88504778752 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=94507933696 00:29:53.175 18:44:38 
reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6003154944 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=47250591744 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253966848 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=18891902976 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901590016 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9687040 00:29:53.175 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=47253368832 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253966848 00:29:53.176 18:44:38 reactor_set_interrupt -- 
common/autotest_common.sh@363 -- # uses["$mount"]=598016 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=9450786816 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450790912 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:29:53.176 * Looking for test storage... 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=88504778752 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:29:53.176 18:44:38 reactor_set_interrupt 
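The `df -T` loop traced above (autotest_common.sh@360–@375) caches filesystem type, size, and free space per mount point, then checks each storage candidate against requested_size (2147483648 bytes plus headroom in the log). A self-contained sketch of the same probe, with the harness's candidate list reduced to the current directory:

```shell
# Parse `df -T` into associative arrays keyed by mount point, mirroring
# the read loop in the trace, then test one candidate for free space.
declare -A fss sizes avails
while read -r source fs size use avail _ mount; do
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks
  avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)
requested_size=2147483648
mount=$(df . | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails[$mount]:-0}
if (( target_space >= requested_size )); then
  printf '* Found test storage at %s\n' "$PWD"
else
  printf '* Looking for test storage...\n'
fi
```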
-- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=8217747456 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:53.176 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:53.176 18:44:38 reactor_set_interrupt 
-- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:53.176 18:44:38 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2961372 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2961372 /var/tmp/spdk.sock 00:29:53.176 18:44:38 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2961372 ']' 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
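After setup, interrupt_tgt is launched with `-m 0x07` and the harness blocks in waitforlisten (max_retries=100 above) until the RPC socket answers. A hedged sketch of such a wait loop; this body is illustrative, not autotest_common.sh verbatim:

```shell
# Illustrative wait loop: succeed once the pid is alive and the UNIX
# socket exists; give up after max_retries polls or if the pid dies.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited early
    [ -S "$rpc_addr" ] && return 0           # socket is listening
    sleep 0.1
  done
  return 1
}
```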
00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:53.176 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:53.176 [2024-07-15 18:44:38.613724] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:29:53.176 [2024-07-15 18:44:38.613784] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961372 ] 00:29:53.176 [2024-07-15 18:44:38.710151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:53.435 [2024-07-15 18:44:38.806744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.435 [2024-07-15 18:44:38.806848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.435 [2024-07-15 18:44:38.806848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.435 [2024-07-15 18:44:38.876736] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:53.435 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.435 18:44:38 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:29:53.435 18:44:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:29:53.435 18:44:38 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:53.694 Malloc0 00:29:53.694 Malloc1 00:29:53.694 Malloc2 00:29:53.694 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:29:53.694 18:44:39 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:29:53.694 18:44:39 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:53.694 18:44:39 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:53.694 5000+0 records in 00:29:53.694 5000+0 records out 00:29:53.694 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179148 s, 572 MB/s 00:29:53.694 18:44:39 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:53.952 AIO0 00:29:53.953 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2961372 00:29:53.953 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2961372 without_thd 00:29:53.953 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2961372 00:29:53.953 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:29:53.953 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
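setup_bdev_aio above backs an AIO bdev with a zero-filled file of 5000 × 2048-byte blocks (the `5000+0 records` dd output). The dd step in isolation, writing to a temp file instead of test/interrupt/aiofile; the follow-up `rpc.py bdev_aio_create <file> AIO0 2048` needs a running SPDK target, so it is not reproduced here:

```shell
# Create the 10,240,000-byte backing file the AIO bdev uses.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
stat -c %s "$aiofile"   # prints 10240000
```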
00:29:53.953 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:54.211 18:44:39 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:29:54.211 18:44:39 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:54.211 18:44:39 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:29:54.211 18:44:39 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.211 18:44:39 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:54.211 18:44:39 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:54.469 18:44:39 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
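reactor_get_thread_ids above normalizes a reactor cpumask (0x1 → 1, 0x4 → 4) and filters `thread_get_stats` JSON with jq; note that reactor 2 matched no thread yet (`echo ''`). The same filter against canned JSON standing in for the RPC reply:

```shell
# Canned stand-in for `rpc.py thread_get_stats` output; only the fields
# the jq filter touches are included.
stats='{"threads":[{"id":1,"cpumask":"1"},{"id":2,"cpumask":"2"}]}'
reactor_cpumask=0x1
reactor_cpumask=$(printf '%x' "$((reactor_cpumask))")   # 0x1 -> 1, as in the trace
echo "$stats" | jq --arg reactor_cpumask "$reactor_cpumask" \
  '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'   # prints 1
```

Whether the real RPC encodes cpumask as a string is assumed here from the jq string comparison in the trace.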
echo '' 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:54.728 spdk_thread ids are 1 on reactor0. 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2961372 0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2961372 0 idle 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961372 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.34 reactor_0' 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961372 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.34 reactor_0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:54.728 18:44:40 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2961372 1 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2961372 1 idle 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:54.728 18:44:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:54.729 18:44:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:54.729 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:54.729 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961417 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.00 reactor_1' 
00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961417 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.00 reactor_1 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2961372 2 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2961372 2 idle 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:54.987 18:44:40 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:54.987 18:44:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961419 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.00 reactor_2' 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961419 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.00 reactor_2 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:29:55.246 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:29:55.505 [2024-07-15 18:44:40.843669] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
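Each reactor_is_busy/reactor_is_idle probe above takes one `top -bHn 1` sample for the target pid, strips leading whitespace, and reads %CPU from field 9; per interrupt/common.sh, busy fails when the truncated rate is below 70 and idle fails when it is above 30. The sampling half of that probe, pointed at the current shell for illustration:

```shell
hash top                               # the harness bails without top (common.sh@18)
pid=$$
top_line=$(top -bHn 1 -p "$pid" -w 256 | sed -e 's/^\s*//g' \
  | awk -v p="$pid" '$1 == p { print; exit }')
cpu_rate=$(echo "$top_line" | awk '{print $9}')
cpu_rate=${cpu_rate%%.*}               # "99.9" -> 99, "0.0" -> 0
if [ "${cpu_rate:-0}" -gt 30 ]; then echo busy; else echo idle; fi
```

An idle process samples near 0.0 and reports idle; a polling reactor samples near 99.9 and reports busy, matching the trace above.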
00:29:55.505 18:44:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:55.763 [2024-07-15 18:44:41.103397] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:55.763 [2024-07-15 18:44:41.103731] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:55.763 18:44:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:56.021 [2024-07-15 18:44:41.359396] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:56.021 [2024-07-15 18:44:41.359640] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2961372 0 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2961372 0 busy 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:56.021 18:44:41 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:56.021 18:44:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961372 root 20 0 128.2g 36608 23232 R 99.9 0.0 0:00.78 reactor_0' 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961372 root 20 0 128.2g 36608 23232 R 99.9 0.0 0:00.78 reactor_0 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2961372 2 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2961372 2 busy 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:56.022 18:44:41 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:56.022 18:44:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961419 root 20 0 128.2g 36608 23232 R 99.9 0.0 0:00.35 reactor_2' 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961419 root 20 0 128.2g 36608 23232 R 99.9 0.0 0:00.35 reactor_2 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:56.281 18:44:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:56.539 [2024-07-15 18:44:41.967376] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:29:56.539 [2024-07-15 18:44:41.967489] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2961372 2 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2961372 2 idle 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:56.539 18:44:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961419 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.60 reactor_2' 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961419 root 20 0 128.2g 36608 23232 S 0.0 0.0 0:00.60 reactor_2 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:56.798 18:44:42 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:56.798 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:57.056 [2024-07-15 18:44:42.411371] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:57.056 [2024-07-15 18:44:42.411568] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:57.056 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:29:57.056 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:29:57.056 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:29:57.314 [2024-07-15 18:44:42.683588] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2961372 0 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2961372 0 idle 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2961372 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2961372 -w 256 00:29:57.314 18:44:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2961372 root 20 0 128.2g 36608 23232 S 6.7 0.0 0:01.65 reactor_0' 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2961372 root 20 0 128.2g 36608 23232 S 6.7 0.0 0:01.65 reactor_0 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:29:57.573 18:44:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2961372 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2961372 ']' 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2961372 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2961372 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2961372' 00:29:57.573 killing process with pid 2961372 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2961372 00:29:57.573 18:44:42 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2961372 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:57.832 18:44:43 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2962179 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:57.832 18:44:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2962179 /var/tmp/spdk.sock 00:29:57.832 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2962179 ']' 00:29:57.832 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.832 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.832 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.832 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.832 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.832 [2024-07-15 18:44:43.206561] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:29:57.832 [2024-07-15 18:44:43.206619] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962179 ] 00:29:57.832 [2024-07-15 18:44:43.304001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.090 [2024-07-15 18:44:43.400699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.090 [2024-07-15 18:44:43.400802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.090 [2024-07-15 18:44:43.400803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.090 [2024-07-15 18:44:43.470807] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:58.090 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.090 18:44:43 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:29:58.090 18:44:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:29:58.090 18:44:43 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:58.348 Malloc0 00:29:58.348 Malloc1 00:29:58.348 Malloc2 00:29:58.348 18:44:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:29:58.348 18:44:43 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:29:58.348 18:44:43 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:58.348 18:44:43 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:58.348 5000+0 records in 00:29:58.348 5000+0 records out 00:29:58.348 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0171745 s, 596 MB/s 
00:29:58.348 18:44:43 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:58.606 AIO0 00:29:58.606 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2962179 00:29:58.606 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2962179 00:29:58.606 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2962179 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:58.607 18:44:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:58.865 18:44:44 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:58.865 18:44:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:59.124 spdk_thread ids are 1 on reactor0. 
00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2962179 0 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2962179 0 idle 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:29:59.124 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962179 root 20 0 128.2g 37312 23936 S 6.7 0.0 0:00.34 reactor_0' 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962179 root 20 0 128.2g 37312 23936 S 6.7 0.0 0:00.34 reactor_0 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle 
= \b\u\s\y ]] 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2962179 1 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2962179 1 idle 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:29:59.382 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962182 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:00.00 reactor_1' 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962182 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:00.00 reactor_1 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:59.641 18:44:44 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2962179 2 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2962179 2 idle 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:29:59.641 18:44:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962183 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:00.00 reactor_2' 
00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962183 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:00.00 reactor_2 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:29:59.641 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:59.900 [2024-07-15 18:44:45.401424] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:59.900 [2024-07-15 18:44:45.401590] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:29:59.900 [2024-07-15 18:44:45.401793] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:59.900 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:30:00.158 [2024-07-15 18:44:45.649982] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:30:00.158 [2024-07-15 18:44:45.650185] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2962179 0 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2962179 0 busy 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:30:00.158 18:44:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962179 root 20 0 128.2g 37312 23936 R 93.8 0.0 0:00.76 reactor_0' 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962179 root 20 0 128.2g 37312 23936 R 93.8 0.0 0:00.76 reactor_0 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:30:00.417 18:44:45 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2962179 2 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2962179 2 busy 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:30:00.417 18:44:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962183 root 20 0 128.2g 37312 23936 R 99.9 0.0 0:00.35 reactor_2' 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962183 root 20 0 128.2g 37312 23936 R 99.9 0.0 0:00.35 reactor_2 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:00.675 18:44:46 
reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:00.675 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:30:00.934 [2024-07-15 18:44:46.251750] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:30:00.934 [2024-07-15 18:44:46.251873] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2962179 2 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2962179 2 idle 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 
00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962183 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:00.59 reactor_2' 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962183 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:00.59 reactor_2 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:00.934 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:30:01.193 [2024-07-15 18:44:46.688878] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:30:01.193 [2024-07-15 18:44:46.689081] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:30:01.193 [2024-07-15 18:44:46.689104] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2962179 0 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2962179 0 idle 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2962179 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2962179 -w 256 00:30:01.193 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2962179 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:01.62 reactor_0' 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2962179 root 20 0 128.2g 37312 23936 S 0.0 0.0 0:01.62 reactor_0 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:01.450 18:44:46 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:01.450 18:44:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2962179 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2962179 ']' 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2962179 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2962179 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2962179' 00:30:01.450 killing process with pid 2962179 00:30:01.450 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2962179 00:30:01.451 18:44:46 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2962179 00:30:01.709 18:44:47 reactor_set_interrupt -- 
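The killprocess flow traced just above (autotest_common.sh@948-972) follows this shape. A sketch under assumptions: a background `sleep` stands in for the spdk_tgt reactor process (pid 2962179 in the log), and the sudo guard mirrors the `'[' reactor_0 = sudo ']'` test in the trace.

```shell
#!/usr/bin/env bash
# Sketch of killprocess: guard against an empty pid, confirm the process is
# alive, refuse to kill a sudo wrapper, then kill it and reap it with wait.
killprocess() {
  local pid=$1 process_name=
  [ -n "$pid" ] || return 1                 # the '[' -z "$pid" ']' guard
  kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  [ "$process_name" != sudo ] || return 1   # never signal a sudo parent
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap; exit status is the signal
  return 0
}

sleep 60 &                                  # stand-in for the app under test
bgpid=$!
killprocess "$bgpid"
```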
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:30:01.709 18:44:47 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:30:01.709 00:30:01.709 real 0m8.823s 00:30:01.709 user 0m9.562s 00:30:01.709 sys 0m1.613s 00:30:01.709 18:44:47 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:01.709 18:44:47 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:01.709 ************************************ 00:30:01.709 END TEST reactor_set_interrupt 00:30:01.709 ************************************ 00:30:01.709 18:44:47 -- common/autotest_common.sh@1142 -- # return 0 00:30:01.709 18:44:47 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:01.709 18:44:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:01.709 18:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.709 18:44:47 -- common/autotest_common.sh@10 -- # set +x 00:30:01.709 ************************************ 00:30:01.709 START TEST reap_unregistered_poller 00:30:01.709 ************************************ 00:30:01.709 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:01.970 * Looking for test storage... 
00:30:01.970 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.970 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:30:01.971 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:01.971 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.971 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.971 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:30:01.971 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:01.971 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:30:01.971 18:44:47 reap_unregistered_poller -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:01.971 18:44:47 
reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@36 -- # 
CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:30:01.971 
18:44:47 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:30:01.971 18:44:47 reap_unregistered_poller -- 
common/build_config.sh@75 -- # CONFIG_TESTS=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:30:01.971 18:44:47 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:30:01.971 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@12 -- # 
_examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:30:01.971 18:44:47 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:01.971 #define SPDK_CONFIG_H 00:30:01.971 #define SPDK_CONFIG_APPS 1 00:30:01.971 #define SPDK_CONFIG_ARCH native 00:30:01.971 #undef SPDK_CONFIG_ASAN 00:30:01.971 #undef SPDK_CONFIG_AVAHI 00:30:01.971 #undef SPDK_CONFIG_CET 00:30:01.971 #define SPDK_CONFIG_COVERAGE 1 00:30:01.971 #define SPDK_CONFIG_CROSS_PREFIX 00:30:01.971 #define SPDK_CONFIG_CRYPTO 1 00:30:01.971 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:30:01.972 #undef SPDK_CONFIG_CUSTOMOCF 00:30:01.972 #undef SPDK_CONFIG_DAOS 00:30:01.972 #define SPDK_CONFIG_DAOS_DIR 00:30:01.972 #define SPDK_CONFIG_DEBUG 1 00:30:01.972 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:30:01.972 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:30:01.972 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:01.972 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:01.972 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:01.972 #undef SPDK_CONFIG_DPDK_UADK 00:30:01.972 #define SPDK_CONFIG_ENV 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:01.972 #define SPDK_CONFIG_EXAMPLES 1 00:30:01.972 #undef SPDK_CONFIG_FC 00:30:01.972 #define SPDK_CONFIG_FC_PATH 00:30:01.972 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:01.972 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:01.972 #undef SPDK_CONFIG_FUSE 00:30:01.972 #undef SPDK_CONFIG_FUZZER 00:30:01.972 #define SPDK_CONFIG_FUZZER_LIB 00:30:01.972 #undef SPDK_CONFIG_GOLANG 00:30:01.972 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:01.972 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:01.972 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:01.972 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:01.972 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:01.972 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:01.972 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:01.972 #define SPDK_CONFIG_IDXD 1 00:30:01.972 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:01.972 #define SPDK_CONFIG_IPSEC_MB 1 00:30:01.972 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:30:01.972 #define SPDK_CONFIG_ISAL 1 00:30:01.972 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:01.972 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:01.972 #define SPDK_CONFIG_LIBDIR 00:30:01.972 #undef SPDK_CONFIG_LTO 00:30:01.972 #define SPDK_CONFIG_MAX_LCORES 128 00:30:01.972 #define SPDK_CONFIG_NVME_CUSE 1 00:30:01.972 #undef SPDK_CONFIG_OCF 00:30:01.972 #define SPDK_CONFIG_OCF_PATH 00:30:01.972 #define SPDK_CONFIG_OPENSSL_PATH 00:30:01.972 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:01.972 #define SPDK_CONFIG_PGO_DIR 00:30:01.972 #undef SPDK_CONFIG_PGO_USE 00:30:01.972 #define SPDK_CONFIG_PREFIX /usr/local 00:30:01.972 #undef SPDK_CONFIG_RAID5F 00:30:01.972 #undef SPDK_CONFIG_RBD 00:30:01.972 #define SPDK_CONFIG_RDMA 1 00:30:01.972 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:01.972 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:01.972 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:01.972 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:01.972 #define 
SPDK_CONFIG_SHARED 1 00:30:01.972 #undef SPDK_CONFIG_SMA 00:30:01.972 #define SPDK_CONFIG_TESTS 1 00:30:01.972 #undef SPDK_CONFIG_TSAN 00:30:01.972 #define SPDK_CONFIG_UBLK 1 00:30:01.972 #define SPDK_CONFIG_UBSAN 1 00:30:01.972 #undef SPDK_CONFIG_UNIT_TESTS 00:30:01.972 #undef SPDK_CONFIG_URING 00:30:01.972 #define SPDK_CONFIG_URING_PATH 00:30:01.972 #undef SPDK_CONFIG_URING_ZNS 00:30:01.972 #undef SPDK_CONFIG_USDT 00:30:01.972 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:30:01.972 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:30:01.972 #undef SPDK_CONFIG_VFIO_USER 00:30:01.972 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:01.972 #define SPDK_CONFIG_VHOST 1 00:30:01.972 #define SPDK_CONFIG_VIRTIO 1 00:30:01.972 #undef SPDK_CONFIG_VTUNE 00:30:01.972 #define SPDK_CONFIG_VTUNE_DIR 00:30:01.972 #define SPDK_CONFIG_WERROR 1 00:30:01.972 #define SPDK_CONFIG_WPDK_DIR 00:30:01.972 #undef SPDK_CONFIG_XNVME 00:30:01.972 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:01.972 18:44:47 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.972 18:44:47 reap_unregistered_poller -- paths/export.sh@2 -- # 
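The applications.sh@23 test above glob-matches the entire contents of include/spdk/config.h against a backslash-escaped `#define SPDK_CONFIG_DEBUG` pattern; the escaped form in the xtrace is equivalent to the quoted pattern in this minimal sketch (the config text here is a trimmed stand-in for the full header dumped above).

```shell
#!/usr/bin/env bash
# Sketch of the debug-build detection: match the whole header text with
# [[ ... == *pattern* ]] rather than grepping the file.
config_h='#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#endif /* SPDK_CONFIG_H */'

if [[ $config_h == *"#define SPDK_CONFIG_DEBUG"* ]]; then
  debug_build=1
else
  debug_build=0
fi
echo "debug_build=$debug_build"
```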
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.972 18:44:47 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.972 18:44:47 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.972 18:44:47 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:30:01.972 18:44:47 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source 
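The repeated /opt/go, /opt/protoc and /opt/golangci entries in the exported PATH above come from paths/export.sh prepending each tool directory in turn every time the file is sourced; roughly (a sketch, not the file's literal contents):

```shell
#!/usr/bin/env bash
# Sketch of the prepend pattern behind paths/export.sh@2-6: each tool bin
# directory is pushed onto the front of PATH, so re-sourcing the file yields
# the duplicated /opt/... entries visible in the log.
PATH=/usr/local/bin:/usr/bin:/bin
for dir in /opt/golangci/1.54.2/bin /opt/go/1.21.1/bin /opt/protoc/21.7/bin; do
  PATH=$dir:$PATH
done
export PATH
echo "$PATH"
```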
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:01.972 18:44:47 reap_unregistered_poller -- 
pm/common@76 -- # SUDO[0]= 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:30:01.972 18:44:47 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:01.972 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:01.972 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:30:01.972 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:01.973 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:30:01.973 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:01.973 18:44:47 reap_unregistered_poller 
-- common/autotest_common.sh@150 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:30:01.973 18:44:47 reap_unregistered_poller -- 
common/autotest_common.sh@171 -- # : 0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export 
SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:01.973 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:30:01.974 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j88 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 2962913 ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 2962913 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5DTKXb 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.5DTKXb/tests/interrupt /tmp/spdk.5DTKXb 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:30:01.974 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=954421248 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4330008576 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=88504627200 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=94507933696 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6003306496 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # 
fss["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=47250591744 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253966848 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=18891902976 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901590016 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=9687040 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=47253368832 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253966848 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=598016 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- 
common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=9450786816 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450790912 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:30:01.974 * Looking for test storage... 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=88504627200 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:30:01.974 18:44:47 
reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=8217899008 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.974 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:30:01.974 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 
00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2962954 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:01.975 18:44:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2962954 /var/tmp/spdk.sock 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 2962954 ']' 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.975 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:01.975 [2024-07-15 18:44:47.468182] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:01.975 [2024-07-15 18:44:47.468243] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962954 ] 00:30:02.234 [2024-07-15 18:44:47.567222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:02.234 [2024-07-15 18:44:47.659117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.234 [2024-07-15 18:44:47.659220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.234 [2024-07-15 18:44:47.659222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.234 [2024-07-15 18:44:47.730179] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:02.234 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.234 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:30:02.234 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:30:02.234 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:30:02.234 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.234 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:02.234 18:44:47 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:30:02.492 "name": "app_thread", 00:30:02.492 "id": 1, 00:30:02.492 "active_pollers": [], 00:30:02.492 "timed_pollers": [ 00:30:02.492 { 00:30:02.492 "name": "rpc_subsystem_poll_servers", 00:30:02.492 "id": 1, 00:30:02.492 "state": "waiting", 00:30:02.492 "run_count": 0, 00:30:02.492 "busy_count": 0, 00:30:02.492 "period_ticks": 8400000 00:30:02.492 } 00:30:02.492 ], 00:30:02.492 "paused_pollers": [] 00:30:02.492 }' 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:30:02.492 
18:44:47 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:30:02.492 18:44:47 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:02.493 18:44:47 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:02.493 5000+0 records in 00:30:02.493 5000+0 records out 00:30:02.493 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174203 s, 588 MB/s 00:30:02.493 18:44:47 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:30:02.751 AIO0 00:30:02.751 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:03.009 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:30:03.268 "name": "app_thread", 00:30:03.268 "id": 1, 00:30:03.268 "active_pollers": [], 00:30:03.268 "timed_pollers": [ 00:30:03.268 { 00:30:03.268 "name": "rpc_subsystem_poll_servers", 00:30:03.268 "id": 1, 00:30:03.268 "state": "waiting", 00:30:03.268 "run_count": 0, 00:30:03.268 "busy_count": 0, 
00:30:03.268 "period_ticks": 8400000 00:30:03.268 } 00:30:03.268 ], 00:30:03.268 "paused_pollers": [] 00:30:03.268 }' 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:30:03.268 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2962954 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 2962954 ']' 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 2962954 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2962954 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 2962954' 00:30:03.268 killing process with pid 2962954 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 2962954 00:30:03.268 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 2962954 00:30:03.527 18:44:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:30:03.527 18:44:48 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:30:03.527 00:30:03.527 real 0m1.751s 00:30:03.527 user 0m1.394s 00:30:03.527 sys 0m0.497s 00:30:03.527 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.527 18:44:48 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:03.527 ************************************ 00:30:03.527 END TEST reap_unregistered_poller 00:30:03.527 ************************************ 00:30:03.527 18:44:48 -- common/autotest_common.sh@1142 -- # return 0 00:30:03.527 18:44:48 -- spdk/autotest.sh@198 -- # uname -s 00:30:03.527 18:44:48 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:30:03.527 18:44:48 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:30:03.527 18:44:48 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:30:03.527 18:44:48 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:30:03.527 18:44:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.527 18:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:03.527 18:44:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:03.527 
18:44:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:30:03.527 18:44:49 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:30:03.527 18:44:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:03.527 18:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.527 18:44:49 -- common/autotest_common.sh@10 -- # set +x 00:30:03.527 ************************************ 00:30:03.527 START TEST compress_compdev 00:30:03.527 ************************************ 00:30:03.527 18:44:49 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:30:03.786 * Looking for test storage... 
00:30:03.786 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b98b40-9a1d-eb11-906e-0017a4403562 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=80b98b40-9a1d-eb11-906e-0017a4403562 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:03.786 18:44:49 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.786 18:44:49 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.786 18:44:49 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.786 18:44:49 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.786 18:44:49 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.786 18:44:49 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.786 18:44:49 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:30:03.786 18:44:49 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:03.786 18:44:49 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2963264 00:30:03.786 18:44:49 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2963264 00:30:03.786 18:44:49 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:30:03.786 18:44:49 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2963264 ']' 00:30:03.786 18:44:49 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.786 18:44:49 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.786 18:44:49 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.786 18:44:49 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.786 18:44:49 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:03.786 [2024-07-15 18:44:49.209521] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:30:03.786 [2024-07-15 18:44:49.209584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963264 ] 00:30:03.786 [2024-07-15 18:44:49.313161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.045 [2024-07-15 18:44:49.426599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.045 [2024-07-15 18:44:49.426605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.610 [2024-07-15 18:44:50.128841] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:04.867 18:44:50 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.867 18:44:50 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:30:04.867 18:44:50 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:30:04.867 18:44:50 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:04.867 18:44:50 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:08.179 [2024-07-15 18:44:53.312892] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dcfab0 PMD being used: compress_qat 00:30:08.179 18:44:53 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:08.179 18:44:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:08.438 [ 00:30:08.438 { 00:30:08.438 "name": "Nvme0n1", 00:30:08.438 "aliases": [ 00:30:08.438 "b46e1730-1eef-45f7-9108-f2190209c1e0" 00:30:08.438 ], 00:30:08.438 "product_name": "NVMe disk", 00:30:08.438 "block_size": 512, 00:30:08.438 "num_blocks": 1953525168, 00:30:08.438 "uuid": "b46e1730-1eef-45f7-9108-f2190209c1e0", 00:30:08.438 "assigned_rate_limits": { 00:30:08.438 "rw_ios_per_sec": 0, 00:30:08.438 "rw_mbytes_per_sec": 0, 00:30:08.438 "r_mbytes_per_sec": 0, 00:30:08.438 "w_mbytes_per_sec": 0 00:30:08.438 }, 00:30:08.438 "claimed": false, 00:30:08.438 "zoned": false, 00:30:08.438 "supported_io_types": { 00:30:08.438 "read": true, 00:30:08.438 "write": true, 00:30:08.438 "unmap": true, 00:30:08.438 "flush": true, 00:30:08.438 "reset": true, 00:30:08.438 "nvme_admin": true, 00:30:08.438 "nvme_io": true, 00:30:08.438 "nvme_io_md": false, 00:30:08.438 "write_zeroes": true, 00:30:08.438 "zcopy": false, 00:30:08.438 "get_zone_info": false, 00:30:08.438 "zone_management": false, 00:30:08.438 "zone_append": false, 00:30:08.438 "compare": false, 00:30:08.438 "compare_and_write": false, 00:30:08.438 "abort": true, 00:30:08.438 "seek_hole": false, 00:30:08.438 "seek_data": false, 00:30:08.438 "copy": false, 00:30:08.438 "nvme_iov_md": false 00:30:08.438 }, 00:30:08.438 "driver_specific": { 00:30:08.438 "nvme": [ 00:30:08.438 { 00:30:08.438 "pci_address": "0000:5e:00.0", 00:30:08.438 "trid": { 00:30:08.438 "trtype": "PCIe", 00:30:08.438 "traddr": "0000:5e:00.0" 00:30:08.438 }, 00:30:08.438 "ctrlr_data": { 00:30:08.438 "cntlid": 0, 00:30:08.438 "vendor_id": "0x8086", 00:30:08.438 "model_number": "INTEL SSDPE2KX010T8", 00:30:08.438 
"serial_number": "BTLJ913602XE1P0FGN", 00:30:08.438 "firmware_revision": "VDV10184", 00:30:08.438 "oacs": { 00:30:08.438 "security": 0, 00:30:08.438 "format": 1, 00:30:08.438 "firmware": 1, 00:30:08.438 "ns_manage": 1 00:30:08.438 }, 00:30:08.438 "multi_ctrlr": false, 00:30:08.438 "ana_reporting": false 00:30:08.438 }, 00:30:08.438 "vs": { 00:30:08.438 "nvme_version": "1.2" 00:30:08.438 }, 00:30:08.438 "ns_data": { 00:30:08.438 "id": 1, 00:30:08.438 "can_share": false 00:30:08.438 } 00:30:08.438 } 00:30:08.438 ], 00:30:08.438 "mp_policy": "active_passive" 00:30:08.438 } 00:30:08.438 } 00:30:08.438 ] 00:30:08.438 18:44:53 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:08.438 18:44:53 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:08.697 [2024-07-15 18:44:54.091528] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1c1df40 PMD being used: compress_qat 00:30:09.631 f34ba0b0-7569-4f98-bb3a-55c93cee5223 00:30:09.631 18:44:54 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:09.631 e8fe7134-a36a-49b8-a843-2c3d0e03fbd1 00:30:09.631 18:44:55 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:09.631 18:44:55 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:30:09.631 18:44:55 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:09.631 18:44:55 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:09.631 18:44:55 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:09.631 18:44:55 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:09.631 18:44:55 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:30:09.889 18:44:55 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:10.147 [ 00:30:10.147 { 00:30:10.147 "name": "e8fe7134-a36a-49b8-a843-2c3d0e03fbd1", 00:30:10.147 "aliases": [ 00:30:10.147 "lvs0/lv0" 00:30:10.147 ], 00:30:10.147 "product_name": "Logical Volume", 00:30:10.147 "block_size": 512, 00:30:10.148 "num_blocks": 204800, 00:30:10.148 "uuid": "e8fe7134-a36a-49b8-a843-2c3d0e03fbd1", 00:30:10.148 "assigned_rate_limits": { 00:30:10.148 "rw_ios_per_sec": 0, 00:30:10.148 "rw_mbytes_per_sec": 0, 00:30:10.148 "r_mbytes_per_sec": 0, 00:30:10.148 "w_mbytes_per_sec": 0 00:30:10.148 }, 00:30:10.148 "claimed": false, 00:30:10.148 "zoned": false, 00:30:10.148 "supported_io_types": { 00:30:10.148 "read": true, 00:30:10.148 "write": true, 00:30:10.148 "unmap": true, 00:30:10.148 "flush": false, 00:30:10.148 "reset": true, 00:30:10.148 "nvme_admin": false, 00:30:10.148 "nvme_io": false, 00:30:10.148 "nvme_io_md": false, 00:30:10.148 "write_zeroes": true, 00:30:10.148 "zcopy": false, 00:30:10.148 "get_zone_info": false, 00:30:10.148 "zone_management": false, 00:30:10.148 "zone_append": false, 00:30:10.148 "compare": false, 00:30:10.148 "compare_and_write": false, 00:30:10.148 "abort": false, 00:30:10.148 "seek_hole": true, 00:30:10.148 "seek_data": true, 00:30:10.148 "copy": false, 00:30:10.148 "nvme_iov_md": false 00:30:10.148 }, 00:30:10.148 "driver_specific": { 00:30:10.148 "lvol": { 00:30:10.148 "lvol_store_uuid": "f34ba0b0-7569-4f98-bb3a-55c93cee5223", 00:30:10.148 "base_bdev": "Nvme0n1", 00:30:10.148 "thin_provision": true, 00:30:10.148 "num_allocated_clusters": 0, 00:30:10.148 "snapshot": false, 00:30:10.148 "clone": false, 00:30:10.148 "esnap_clone": false 00:30:10.148 } 00:30:10.148 } 00:30:10.148 } 00:30:10.148 ] 00:30:10.148 18:44:55 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:10.148 18:44:55 
compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:10.148 18:44:55 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:10.405 [2024-07-15 18:44:55.778557] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:10.405 COMP_lvs0/lv0 00:30:10.405 18:44:55 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:10.405 18:44:55 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:10.405 18:44:55 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:10.405 18:44:55 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:10.405 18:44:55 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:10.405 18:44:55 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:10.405 18:44:55 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:10.663 18:44:56 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:10.922 [ 00:30:10.922 { 00:30:10.922 "name": "COMP_lvs0/lv0", 00:30:10.922 "aliases": [ 00:30:10.922 "b7a83303-21bd-5cee-8e9e-19544b896727" 00:30:10.922 ], 00:30:10.922 "product_name": "compress", 00:30:10.922 "block_size": 512, 00:30:10.922 "num_blocks": 200704, 00:30:10.922 "uuid": "b7a83303-21bd-5cee-8e9e-19544b896727", 00:30:10.922 "assigned_rate_limits": { 00:30:10.922 "rw_ios_per_sec": 0, 00:30:10.922 "rw_mbytes_per_sec": 0, 00:30:10.922 "r_mbytes_per_sec": 0, 00:30:10.922 "w_mbytes_per_sec": 0 00:30:10.922 }, 00:30:10.922 "claimed": false, 00:30:10.922 "zoned": false, 00:30:10.922 "supported_io_types": { 00:30:10.922 "read": true, 00:30:10.922 
"write": true, 00:30:10.922 "unmap": false, 00:30:10.922 "flush": false, 00:30:10.922 "reset": false, 00:30:10.922 "nvme_admin": false, 00:30:10.922 "nvme_io": false, 00:30:10.922 "nvme_io_md": false, 00:30:10.922 "write_zeroes": true, 00:30:10.922 "zcopy": false, 00:30:10.922 "get_zone_info": false, 00:30:10.922 "zone_management": false, 00:30:10.922 "zone_append": false, 00:30:10.922 "compare": false, 00:30:10.922 "compare_and_write": false, 00:30:10.922 "abort": false, 00:30:10.922 "seek_hole": false, 00:30:10.922 "seek_data": false, 00:30:10.922 "copy": false, 00:30:10.922 "nvme_iov_md": false 00:30:10.922 }, 00:30:10.922 "driver_specific": { 00:30:10.922 "compress": { 00:30:10.922 "name": "COMP_lvs0/lv0", 00:30:10.922 "base_bdev_name": "e8fe7134-a36a-49b8-a843-2c3d0e03fbd1" 00:30:10.922 } 00:30:10.922 } 00:30:10.922 } 00:30:10.922 ] 00:30:10.922 18:44:56 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:10.922 18:44:56 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:10.922 [2024-07-15 18:44:56.408148] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8bbc1b15c0 PMD being used: compress_qat 00:30:10.922 [2024-07-15 18:44:56.411107] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1c05f50 PMD being used: compress_qat 00:30:10.922 Running I/O for 3 seconds... 
00:30:14.204 00:30:14.204 Latency(us) 00:30:14.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.205 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:14.205 Verification LBA range: start 0x0 length 0x3100 00:30:14.205 COMP_lvs0/lv0 : 3.01 1699.86 6.64 0.00 0.00 18723.05 232.11 23218.47 00:30:14.205 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:14.205 Verification LBA range: start 0x3100 length 0x3100 00:30:14.205 COMP_lvs0/lv0 : 3.01 1764.62 6.89 0.00 0.00 18012.99 197.00 23842.62 00:30:14.205 =================================================================================================================== 00:30:14.205 Total : 3464.48 13.53 0.00 0.00 18361.38 197.00 23842.62 00:30:14.205 0 00:30:14.205 18:44:59 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:14.205 18:44:59 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:14.205 18:44:59 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:14.462 18:44:59 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:14.462 18:44:59 compress_compdev -- compress/compress.sh@78 -- # killprocess 2963264 00:30:14.462 18:44:59 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2963264 ']' 00:30:14.462 18:44:59 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2963264 00:30:14.462 18:44:59 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:30:14.462 18:44:59 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:14.462 18:44:59 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2963264 00:30:14.719 18:45:00 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:30:14.719 18:45:00 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:14.719 18:45:00 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2963264' 00:30:14.719 killing process with pid 2963264 00:30:14.719 18:45:00 compress_compdev -- common/autotest_common.sh@967 -- # kill 2963264 00:30:14.719 Received shutdown signal, test time was about 3.000000 seconds 00:30:14.719 00:30:14.719 Latency(us) 00:30:14.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.719 =================================================================================================================== 00:30:14.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.719 18:45:00 compress_compdev -- common/autotest_common.sh@972 -- # wait 2963264 00:30:16.124 18:45:01 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:30:16.124 18:45:01 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:16.124 18:45:01 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2965295 00:30:16.124 18:45:01 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:16.124 18:45:01 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:30:16.124 18:45:01 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2965295 00:30:16.124 18:45:01 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2965295 ']' 00:30:16.124 18:45:01 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.124 18:45:01 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.124 18:45:01 compress_compdev -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.124 18:45:01 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.124 18:45:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:16.381 [2024-07-15 18:45:01.696797] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:16.381 [2024-07-15 18:45:01.696858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2965295 ] 00:30:16.381 [2024-07-15 18:45:01.801348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:16.381 [2024-07-15 18:45:01.913688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.381 [2024-07-15 18:45:01.913694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.327 [2024-07-15 18:45:02.605274] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:17.327 18:45:02 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.327 18:45:02 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:30:17.327 18:45:02 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:30:17.327 18:45:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:17.327 18:45:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:20.616 [2024-07-15 18:45:05.776753] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xda0ab0 PMD being used: compress_qat 00:30:20.616 18:45:05 compress_compdev -- 
compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:20.616 18:45:05 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:20.616 18:45:05 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:20.616 18:45:05 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:20.617 18:45:05 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:20.617 18:45:05 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:20.617 18:45:05 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:20.617 18:45:06 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:20.875 [ 00:30:20.875 { 00:30:20.875 "name": "Nvme0n1", 00:30:20.875 "aliases": [ 00:30:20.875 "18887a8f-c83b-4bbf-9c99-607063f80a9d" 00:30:20.875 ], 00:30:20.875 "product_name": "NVMe disk", 00:30:20.875 "block_size": 512, 00:30:20.875 "num_blocks": 1953525168, 00:30:20.875 "uuid": "18887a8f-c83b-4bbf-9c99-607063f80a9d", 00:30:20.875 "assigned_rate_limits": { 00:30:20.875 "rw_ios_per_sec": 0, 00:30:20.875 "rw_mbytes_per_sec": 0, 00:30:20.875 "r_mbytes_per_sec": 0, 00:30:20.875 "w_mbytes_per_sec": 0 00:30:20.875 }, 00:30:20.875 "claimed": false, 00:30:20.875 "zoned": false, 00:30:20.875 "supported_io_types": { 00:30:20.875 "read": true, 00:30:20.875 "write": true, 00:30:20.875 "unmap": true, 00:30:20.875 "flush": true, 00:30:20.875 "reset": true, 00:30:20.875 "nvme_admin": true, 00:30:20.875 "nvme_io": true, 00:30:20.875 "nvme_io_md": false, 00:30:20.875 "write_zeroes": true, 00:30:20.875 "zcopy": false, 00:30:20.875 "get_zone_info": false, 00:30:20.875 "zone_management": false, 00:30:20.875 "zone_append": false, 00:30:20.875 "compare": false, 00:30:20.875 "compare_and_write": false, 00:30:20.875 "abort": true, 00:30:20.875 
"seek_hole": false, 00:30:20.875 "seek_data": false, 00:30:20.875 "copy": false, 00:30:20.875 "nvme_iov_md": false 00:30:20.875 }, 00:30:20.875 "driver_specific": { 00:30:20.875 "nvme": [ 00:30:20.875 { 00:30:20.875 "pci_address": "0000:5e:00.0", 00:30:20.875 "trid": { 00:30:20.875 "trtype": "PCIe", 00:30:20.875 "traddr": "0000:5e:00.0" 00:30:20.875 }, 00:30:20.875 "ctrlr_data": { 00:30:20.875 "cntlid": 0, 00:30:20.875 "vendor_id": "0x8086", 00:30:20.875 "model_number": "INTEL SSDPE2KX010T8", 00:30:20.875 "serial_number": "BTLJ913602XE1P0FGN", 00:30:20.875 "firmware_revision": "VDV10184", 00:30:20.875 "oacs": { 00:30:20.875 "security": 0, 00:30:20.875 "format": 1, 00:30:20.875 "firmware": 1, 00:30:20.875 "ns_manage": 1 00:30:20.875 }, 00:30:20.875 "multi_ctrlr": false, 00:30:20.875 "ana_reporting": false 00:30:20.875 }, 00:30:20.875 "vs": { 00:30:20.875 "nvme_version": "1.2" 00:30:20.875 }, 00:30:20.875 "ns_data": { 00:30:20.875 "id": 1, 00:30:20.875 "can_share": false 00:30:20.875 } 00:30:20.875 } 00:30:20.875 ], 00:30:20.875 "mp_policy": "active_passive" 00:30:20.875 } 00:30:20.875 } 00:30:20.875 ] 00:30:20.875 18:45:06 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:20.875 18:45:06 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:21.134 [2024-07-15 18:45:06.567423] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xbef050 PMD being used: compress_qat 00:30:22.087 a588a17a-8cb5-4592-be5f-62261082923e 00:30:22.087 18:45:07 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:22.345 f4549209-469b-48ca-801b-1176eb98e54a 00:30:22.345 18:45:07 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:22.345 18:45:07 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 
00:30:22.345 18:45:07 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:22.345 18:45:07 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:22.345 18:45:07 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:22.345 18:45:07 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:22.345 18:45:07 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.602 18:45:08 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:22.860 [ 00:30:22.860 { 00:30:22.860 "name": "f4549209-469b-48ca-801b-1176eb98e54a", 00:30:22.860 "aliases": [ 00:30:22.860 "lvs0/lv0" 00:30:22.860 ], 00:30:22.860 "product_name": "Logical Volume", 00:30:22.860 "block_size": 512, 00:30:22.860 "num_blocks": 204800, 00:30:22.860 "uuid": "f4549209-469b-48ca-801b-1176eb98e54a", 00:30:22.860 "assigned_rate_limits": { 00:30:22.860 "rw_ios_per_sec": 0, 00:30:22.860 "rw_mbytes_per_sec": 0, 00:30:22.860 "r_mbytes_per_sec": 0, 00:30:22.860 "w_mbytes_per_sec": 0 00:30:22.860 }, 00:30:22.860 "claimed": false, 00:30:22.860 "zoned": false, 00:30:22.860 "supported_io_types": { 00:30:22.860 "read": true, 00:30:22.860 "write": true, 00:30:22.860 "unmap": true, 00:30:22.860 "flush": false, 00:30:22.860 "reset": true, 00:30:22.860 "nvme_admin": false, 00:30:22.860 "nvme_io": false, 00:30:22.860 "nvme_io_md": false, 00:30:22.860 "write_zeroes": true, 00:30:22.860 "zcopy": false, 00:30:22.860 "get_zone_info": false, 00:30:22.860 "zone_management": false, 00:30:22.860 "zone_append": false, 00:30:22.860 "compare": false, 00:30:22.860 "compare_and_write": false, 00:30:22.860 "abort": false, 00:30:22.860 "seek_hole": true, 00:30:22.860 "seek_data": true, 00:30:22.860 "copy": false, 00:30:22.860 "nvme_iov_md": false 00:30:22.860 }, 00:30:22.860 
"driver_specific": { 00:30:22.860 "lvol": { 00:30:22.860 "lvol_store_uuid": "a588a17a-8cb5-4592-be5f-62261082923e", 00:30:22.860 "base_bdev": "Nvme0n1", 00:30:22.860 "thin_provision": true, 00:30:22.860 "num_allocated_clusters": 0, 00:30:22.860 "snapshot": false, 00:30:22.860 "clone": false, 00:30:22.860 "esnap_clone": false 00:30:22.860 } 00:30:22.860 } 00:30:22.860 } 00:30:22.860 ] 00:30:22.860 18:45:08 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:22.860 18:45:08 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:30:22.860 18:45:08 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:30:23.117 [2024-07-15 18:45:08.542592] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:23.117 COMP_lvs0/lv0 00:30:23.117 18:45:08 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:23.117 18:45:08 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:23.117 18:45:08 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:23.117 18:45:08 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:23.117 18:45:08 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:23.117 18:45:08 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:23.117 18:45:08 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:23.375 18:45:08 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:23.634 [ 00:30:23.634 { 00:30:23.634 "name": "COMP_lvs0/lv0", 00:30:23.634 "aliases": [ 00:30:23.634 "c9cb1da2-749e-501f-af6a-6a0134da55b8" 00:30:23.634 ], 
00:30:23.634 "product_name": "compress", 00:30:23.634 "block_size": 512, 00:30:23.634 "num_blocks": 200704, 00:30:23.634 "uuid": "c9cb1da2-749e-501f-af6a-6a0134da55b8", 00:30:23.634 "assigned_rate_limits": { 00:30:23.634 "rw_ios_per_sec": 0, 00:30:23.634 "rw_mbytes_per_sec": 0, 00:30:23.634 "r_mbytes_per_sec": 0, 00:30:23.634 "w_mbytes_per_sec": 0 00:30:23.634 }, 00:30:23.634 "claimed": false, 00:30:23.634 "zoned": false, 00:30:23.634 "supported_io_types": { 00:30:23.634 "read": true, 00:30:23.634 "write": true, 00:30:23.634 "unmap": false, 00:30:23.634 "flush": false, 00:30:23.634 "reset": false, 00:30:23.634 "nvme_admin": false, 00:30:23.634 "nvme_io": false, 00:30:23.634 "nvme_io_md": false, 00:30:23.634 "write_zeroes": true, 00:30:23.634 "zcopy": false, 00:30:23.634 "get_zone_info": false, 00:30:23.634 "zone_management": false, 00:30:23.634 "zone_append": false, 00:30:23.634 "compare": false, 00:30:23.634 "compare_and_write": false, 00:30:23.634 "abort": false, 00:30:23.634 "seek_hole": false, 00:30:23.634 "seek_data": false, 00:30:23.634 "copy": false, 00:30:23.634 "nvme_iov_md": false 00:30:23.634 }, 00:30:23.634 "driver_specific": { 00:30:23.634 "compress": { 00:30:23.634 "name": "COMP_lvs0/lv0", 00:30:23.634 "base_bdev_name": "f4549209-469b-48ca-801b-1176eb98e54a" 00:30:23.634 } 00:30:23.634 } 00:30:23.634 } 00:30:23.634 ] 00:30:23.634 18:45:09 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:23.634 18:45:09 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:23.634 [2024-07-15 18:45:09.164138] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f37581b15c0 PMD being used: compress_qat 00:30:23.634 [2024-07-15 18:45:09.167124] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc80bf0 PMD being used: compress_qat 00:30:23.634 Running I/O for 3 seconds... 
00:30:26.917 00:30:26.917 Latency(us) 00:30:26.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.917 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:26.917 Verification LBA range: start 0x0 length 0x3100 00:30:26.917 COMP_lvs0/lv0 : 3.02 1704.01 6.66 0.00 0.00 18655.28 298.42 21221.18 00:30:26.917 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:26.917 Verification LBA range: start 0x3100 length 0x3100 00:30:26.917 COMP_lvs0/lv0 : 3.01 1773.96 6.93 0.00 0.00 17898.57 199.92 20971.52 00:30:26.917 =================================================================================================================== 00:30:26.917 Total : 3477.97 13.59 0.00 0.00 18269.49 199.92 21221.18 00:30:26.917 0 00:30:26.917 18:45:12 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:26.917 18:45:12 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:27.176 18:45:12 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:27.176 18:45:12 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:27.435 18:45:12 compress_compdev -- compress/compress.sh@78 -- # killprocess 2965295 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2965295 ']' 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2965295 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2965295 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2965295' 00:30:27.435 killing process with pid 2965295 00:30:27.435 18:45:12 compress_compdev -- common/autotest_common.sh@967 -- # kill 2965295 00:30:27.435 Received shutdown signal, test time was about 3.000000 seconds 00:30:27.435 00:30:27.435 Latency(us) 00:30:27.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.435 =================================================================================================================== 00:30:27.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.436 18:45:12 compress_compdev -- common/autotest_common.sh@972 -- # wait 2965295 00:30:29.337 18:45:14 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:30:29.337 18:45:14 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:29.337 18:45:14 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2967639 00:30:29.337 18:45:14 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:29.338 18:45:14 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2967639 00:30:29.338 18:45:14 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:30:29.338 18:45:14 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2967639 ']' 00:30:29.338 18:45:14 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.338 18:45:14 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.338 18:45:14 compress_compdev -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.338 18:45:14 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.338 18:45:14 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:29.338 [2024-07-15 18:45:14.437554] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:29.338 [2024-07-15 18:45:14.437616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967639 ] 00:30:29.338 [2024-07-15 18:45:14.540677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:29.338 [2024-07-15 18:45:14.652827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.338 [2024-07-15 18:45:14.652833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.904 [2024-07-15 18:45:15.346518] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:29.904 18:45:15 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:29.904 18:45:15 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:30:29.904 18:45:15 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:30:29.904 18:45:15 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:29.904 18:45:15 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:33.224 [2024-07-15 18:45:18.520837] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xcc1ab0 PMD being used: compress_qat 00:30:33.224 18:45:18 compress_compdev -- 
compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:33.224 18:45:18 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:33.481 [ 00:30:33.481 { 00:30:33.481 "name": "Nvme0n1", 00:30:33.481 "aliases": [ 00:30:33.481 "60f59964-80cc-4708-8e31-130380c8d3b9" 00:30:33.481 ], 00:30:33.481 "product_name": "NVMe disk", 00:30:33.481 "block_size": 512, 00:30:33.481 "num_blocks": 1953525168, 00:30:33.481 "uuid": "60f59964-80cc-4708-8e31-130380c8d3b9", 00:30:33.481 "assigned_rate_limits": { 00:30:33.481 "rw_ios_per_sec": 0, 00:30:33.481 "rw_mbytes_per_sec": 0, 00:30:33.481 "r_mbytes_per_sec": 0, 00:30:33.481 "w_mbytes_per_sec": 0 00:30:33.481 }, 00:30:33.481 "claimed": false, 00:30:33.481 "zoned": false, 00:30:33.481 "supported_io_types": { 00:30:33.481 "read": true, 00:30:33.481 "write": true, 00:30:33.481 "unmap": true, 00:30:33.481 "flush": true, 00:30:33.481 "reset": true, 00:30:33.481 "nvme_admin": true, 00:30:33.481 "nvme_io": true, 00:30:33.481 "nvme_io_md": false, 00:30:33.481 "write_zeroes": true, 00:30:33.481 "zcopy": false, 00:30:33.481 "get_zone_info": false, 00:30:33.481 "zone_management": false, 00:30:33.481 "zone_append": false, 00:30:33.481 "compare": false, 00:30:33.481 "compare_and_write": false, 00:30:33.481 "abort": true, 00:30:33.481 
"seek_hole": false, 00:30:33.481 "seek_data": false, 00:30:33.481 "copy": false, 00:30:33.481 "nvme_iov_md": false 00:30:33.482 }, 00:30:33.482 "driver_specific": { 00:30:33.482 "nvme": [ 00:30:33.482 { 00:30:33.482 "pci_address": "0000:5e:00.0", 00:30:33.482 "trid": { 00:30:33.482 "trtype": "PCIe", 00:30:33.482 "traddr": "0000:5e:00.0" 00:30:33.482 }, 00:30:33.482 "ctrlr_data": { 00:30:33.482 "cntlid": 0, 00:30:33.482 "vendor_id": "0x8086", 00:30:33.482 "model_number": "INTEL SSDPE2KX010T8", 00:30:33.482 "serial_number": "BTLJ913602XE1P0FGN", 00:30:33.482 "firmware_revision": "VDV10184", 00:30:33.482 "oacs": { 00:30:33.482 "security": 0, 00:30:33.482 "format": 1, 00:30:33.482 "firmware": 1, 00:30:33.482 "ns_manage": 1 00:30:33.482 }, 00:30:33.482 "multi_ctrlr": false, 00:30:33.482 "ana_reporting": false 00:30:33.482 }, 00:30:33.482 "vs": { 00:30:33.482 "nvme_version": "1.2" 00:30:33.482 }, 00:30:33.482 "ns_data": { 00:30:33.482 "id": 1, 00:30:33.482 "can_share": false 00:30:33.482 } 00:30:33.482 } 00:30:33.482 ], 00:30:33.482 "mp_policy": "active_passive" 00:30:33.482 } 00:30:33.482 } 00:30:33.482 ] 00:30:33.482 18:45:18 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:33.482 18:45:18 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:33.739 [2024-07-15 18:45:19.227238] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xb0ff40 PMD being used: compress_qat 00:30:34.673 347b164c-0610-45af-b783-2c27b4dd8821 00:30:34.673 18:45:20 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:34.930 d8483477-4ddc-4840-8cd5-d8313df9472f 00:30:34.930 18:45:20 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:34.930 18:45:20 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 
00:30:34.930 18:45:20 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:34.930 18:45:20 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:34.930 18:45:20 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:34.930 18:45:20 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:34.930 18:45:20 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:35.188 18:45:20 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:35.446 [ 00:30:35.446 { 00:30:35.446 "name": "d8483477-4ddc-4840-8cd5-d8313df9472f", 00:30:35.446 "aliases": [ 00:30:35.446 "lvs0/lv0" 00:30:35.446 ], 00:30:35.446 "product_name": "Logical Volume", 00:30:35.446 "block_size": 512, 00:30:35.446 "num_blocks": 204800, 00:30:35.446 "uuid": "d8483477-4ddc-4840-8cd5-d8313df9472f", 00:30:35.446 "assigned_rate_limits": { 00:30:35.446 "rw_ios_per_sec": 0, 00:30:35.446 "rw_mbytes_per_sec": 0, 00:30:35.446 "r_mbytes_per_sec": 0, 00:30:35.446 "w_mbytes_per_sec": 0 00:30:35.446 }, 00:30:35.446 "claimed": false, 00:30:35.446 "zoned": false, 00:30:35.446 "supported_io_types": { 00:30:35.446 "read": true, 00:30:35.446 "write": true, 00:30:35.446 "unmap": true, 00:30:35.446 "flush": false, 00:30:35.446 "reset": true, 00:30:35.446 "nvme_admin": false, 00:30:35.446 "nvme_io": false, 00:30:35.446 "nvme_io_md": false, 00:30:35.446 "write_zeroes": true, 00:30:35.446 "zcopy": false, 00:30:35.446 "get_zone_info": false, 00:30:35.446 "zone_management": false, 00:30:35.446 "zone_append": false, 00:30:35.446 "compare": false, 00:30:35.446 "compare_and_write": false, 00:30:35.446 "abort": false, 00:30:35.446 "seek_hole": true, 00:30:35.446 "seek_data": true, 00:30:35.446 "copy": false, 00:30:35.446 "nvme_iov_md": false 00:30:35.446 }, 00:30:35.446 
"driver_specific": { 00:30:35.446 "lvol": { 00:30:35.446 "lvol_store_uuid": "347b164c-0610-45af-b783-2c27b4dd8821", 00:30:35.446 "base_bdev": "Nvme0n1", 00:30:35.446 "thin_provision": true, 00:30:35.446 "num_allocated_clusters": 0, 00:30:35.446 "snapshot": false, 00:30:35.446 "clone": false, 00:30:35.446 "esnap_clone": false 00:30:35.446 } 00:30:35.446 } 00:30:35.446 } 00:30:35.446 ] 00:30:35.446 18:45:20 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:35.446 18:45:20 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:30:35.446 18:45:20 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:30:35.704 [2024-07-15 18:45:21.147009] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:35.704 COMP_lvs0/lv0 00:30:35.704 18:45:21 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:35.704 18:45:21 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:35.704 18:45:21 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:35.704 18:45:21 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:35.704 18:45:21 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:35.704 18:45:21 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:35.704 18:45:21 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:35.962 18:45:21 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:36.221 [ 00:30:36.221 { 00:30:36.221 "name": "COMP_lvs0/lv0", 00:30:36.221 "aliases": [ 00:30:36.221 "b0b5dbd0-2afc-5422-bfb6-a09a12948460" 00:30:36.221 
], 00:30:36.221 "product_name": "compress", 00:30:36.221 "block_size": 4096, 00:30:36.221 "num_blocks": 25088, 00:30:36.221 "uuid": "b0b5dbd0-2afc-5422-bfb6-a09a12948460", 00:30:36.221 "assigned_rate_limits": { 00:30:36.221 "rw_ios_per_sec": 0, 00:30:36.221 "rw_mbytes_per_sec": 0, 00:30:36.221 "r_mbytes_per_sec": 0, 00:30:36.221 "w_mbytes_per_sec": 0 00:30:36.221 }, 00:30:36.221 "claimed": false, 00:30:36.221 "zoned": false, 00:30:36.221 "supported_io_types": { 00:30:36.221 "read": true, 00:30:36.221 "write": true, 00:30:36.221 "unmap": false, 00:30:36.221 "flush": false, 00:30:36.221 "reset": false, 00:30:36.221 "nvme_admin": false, 00:30:36.221 "nvme_io": false, 00:30:36.221 "nvme_io_md": false, 00:30:36.221 "write_zeroes": true, 00:30:36.221 "zcopy": false, 00:30:36.221 "get_zone_info": false, 00:30:36.221 "zone_management": false, 00:30:36.221 "zone_append": false, 00:30:36.221 "compare": false, 00:30:36.221 "compare_and_write": false, 00:30:36.221 "abort": false, 00:30:36.221 "seek_hole": false, 00:30:36.221 "seek_data": false, 00:30:36.221 "copy": false, 00:30:36.221 "nvme_iov_md": false 00:30:36.221 }, 00:30:36.221 "driver_specific": { 00:30:36.221 "compress": { 00:30:36.221 "name": "COMP_lvs0/lv0", 00:30:36.221 "base_bdev_name": "d8483477-4ddc-4840-8cd5-d8313df9472f" 00:30:36.221 } 00:30:36.221 } 00:30:36.221 } 00:30:36.221 ] 00:30:36.221 18:45:21 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:36.221 18:45:21 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:36.492 [2024-07-15 18:45:21.784692] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f9e201b15c0 PMD being used: compress_qat 00:30:36.492 [2024-07-15 18:45:21.787653] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xaf8100 PMD being used: compress_qat 00:30:36.492 Running I/O for 3 seconds... 
00:30:39.773 00:30:39.773 Latency(us) 00:30:39.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.773 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:39.773 Verification LBA range: start 0x0 length 0x3100 00:30:39.773 COMP_lvs0/lv0 : 3.01 1687.19 6.59 0.00 0.00 18847.32 255.51 22469.49 00:30:39.773 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:39.773 Verification LBA range: start 0x3100 length 0x3100 00:30:39.773 COMP_lvs0/lv0 : 3.02 1769.11 6.91 0.00 0.00 17941.05 233.08 21470.84 00:30:39.773 =================================================================================================================== 00:30:39.773 Total : 3456.30 13.50 0.00 0.00 18383.23 233.08 22469.49 00:30:39.773 0 00:30:39.773 18:45:24 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:39.773 18:45:24 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:39.773 18:45:25 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:40.031 18:45:25 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:40.031 18:45:25 compress_compdev -- compress/compress.sh@78 -- # killprocess 2967639 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2967639 ']' 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2967639 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2967639 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2967639' 00:30:40.031 killing process with pid 2967639 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@967 -- # kill 2967639 00:30:40.031 Received shutdown signal, test time was about 3.000000 seconds 00:30:40.031 00:30:40.031 Latency(us) 00:30:40.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.031 =================================================================================================================== 00:30:40.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:40.031 18:45:25 compress_compdev -- common/autotest_common.sh@972 -- # wait 2967639 00:30:41.931 18:45:26 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:30:41.931 18:45:26 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:41.931 18:45:26 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2969556 00:30:41.931 18:45:26 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:41.931 18:45:26 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:30:41.931 18:45:26 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 2969556 00:30:41.931 18:45:26 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2969556 ']' 00:30:41.931 18:45:26 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.931 18:45:26 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:41.931 18:45:26 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.931 18:45:26 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:41.931 18:45:26 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:41.931 [2024-07-15 18:45:27.039897] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:41.931 [2024-07-15 18:45:27.039970] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969556 ] 00:30:41.931 [2024-07-15 18:45:27.139307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:41.931 [2024-07-15 18:45:27.236615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.931 [2024-07-15 18:45:27.236720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.931 [2024-07-15 18:45:27.236721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.498 [2024-07-15 18:45:27.781534] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:42.498 18:45:28 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:42.498 18:45:28 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:30:42.498 18:45:28 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:30:42.498 18:45:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:42.498 18:45:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:45.774 [2024-07-15 18:45:31.023822] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1bb25c0 PMD being used: compress_qat 00:30:45.774 
18:45:31 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:45.774 18:45:31 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:46.032 [ 00:30:46.032 { 00:30:46.032 "name": "Nvme0n1", 00:30:46.032 "aliases": [ 00:30:46.032 "578c9438-e24e-4c97-adb7-8670a36acd01" 00:30:46.032 ], 00:30:46.032 "product_name": "NVMe disk", 00:30:46.032 "block_size": 512, 00:30:46.032 "num_blocks": 1953525168, 00:30:46.032 "uuid": "578c9438-e24e-4c97-adb7-8670a36acd01", 00:30:46.032 "assigned_rate_limits": { 00:30:46.032 "rw_ios_per_sec": 0, 00:30:46.032 "rw_mbytes_per_sec": 0, 00:30:46.032 "r_mbytes_per_sec": 0, 00:30:46.032 "w_mbytes_per_sec": 0 00:30:46.032 }, 00:30:46.032 "claimed": false, 00:30:46.032 "zoned": false, 00:30:46.032 "supported_io_types": { 00:30:46.032 "read": true, 00:30:46.032 "write": true, 00:30:46.032 "unmap": true, 00:30:46.032 "flush": true, 00:30:46.032 "reset": true, 00:30:46.032 "nvme_admin": true, 00:30:46.032 "nvme_io": true, 00:30:46.032 "nvme_io_md": false, 00:30:46.032 "write_zeroes": true, 00:30:46.032 "zcopy": false, 00:30:46.032 "get_zone_info": false, 00:30:46.032 "zone_management": false, 00:30:46.032 "zone_append": false, 00:30:46.032 "compare": false, 00:30:46.032 "compare_and_write": false, 
00:30:46.032 "abort": true, 00:30:46.032 "seek_hole": false, 00:30:46.032 "seek_data": false, 00:30:46.032 "copy": false, 00:30:46.032 "nvme_iov_md": false 00:30:46.032 }, 00:30:46.032 "driver_specific": { 00:30:46.032 "nvme": [ 00:30:46.032 { 00:30:46.032 "pci_address": "0000:5e:00.0", 00:30:46.032 "trid": { 00:30:46.032 "trtype": "PCIe", 00:30:46.032 "traddr": "0000:5e:00.0" 00:30:46.032 }, 00:30:46.032 "ctrlr_data": { 00:30:46.032 "cntlid": 0, 00:30:46.032 "vendor_id": "0x8086", 00:30:46.032 "model_number": "INTEL SSDPE2KX010T8", 00:30:46.032 "serial_number": "BTLJ913602XE1P0FGN", 00:30:46.032 "firmware_revision": "VDV10184", 00:30:46.032 "oacs": { 00:30:46.032 "security": 0, 00:30:46.032 "format": 1, 00:30:46.032 "firmware": 1, 00:30:46.032 "ns_manage": 1 00:30:46.032 }, 00:30:46.032 "multi_ctrlr": false, 00:30:46.032 "ana_reporting": false 00:30:46.032 }, 00:30:46.032 "vs": { 00:30:46.032 "nvme_version": "1.2" 00:30:46.032 }, 00:30:46.032 "ns_data": { 00:30:46.032 "id": 1, 00:30:46.032 "can_share": false 00:30:46.032 } 00:30:46.032 } 00:30:46.032 ], 00:30:46.032 "mp_policy": "active_passive" 00:30:46.032 } 00:30:46.032 } 00:30:46.032 ] 00:30:46.032 18:45:31 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:46.032 18:45:31 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:46.290 [2024-07-15 18:45:31.822367] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a009e0 PMD being used: compress_qat 00:30:47.224 5378b969-149c-4c18-9ff6-e49a4efd9e10 00:30:47.224 18:45:32 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:47.481 68700e22-cbab-47c8-bf0b-66d14888fdc4 00:30:47.481 18:45:33 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:47.739 18:45:33 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:30:47.739 18:45:33 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:47.739 18:45:33 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:47.739 18:45:33 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:47.739 18:45:33 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:47.739 18:45:33 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:48.016 18:45:33 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:48.016 [ 00:30:48.016 { 00:30:48.016 "name": "68700e22-cbab-47c8-bf0b-66d14888fdc4", 00:30:48.016 "aliases": [ 00:30:48.016 "lvs0/lv0" 00:30:48.016 ], 00:30:48.016 "product_name": "Logical Volume", 00:30:48.016 "block_size": 512, 00:30:48.016 "num_blocks": 204800, 00:30:48.016 "uuid": "68700e22-cbab-47c8-bf0b-66d14888fdc4", 00:30:48.016 "assigned_rate_limits": { 00:30:48.016 "rw_ios_per_sec": 0, 00:30:48.016 "rw_mbytes_per_sec": 0, 00:30:48.016 "r_mbytes_per_sec": 0, 00:30:48.016 "w_mbytes_per_sec": 0 00:30:48.016 }, 00:30:48.016 "claimed": false, 00:30:48.016 "zoned": false, 00:30:48.016 "supported_io_types": { 00:30:48.016 "read": true, 00:30:48.016 "write": true, 00:30:48.016 "unmap": true, 00:30:48.016 "flush": false, 00:30:48.016 "reset": true, 00:30:48.016 "nvme_admin": false, 00:30:48.016 "nvme_io": false, 00:30:48.016 "nvme_io_md": false, 00:30:48.016 "write_zeroes": true, 00:30:48.016 "zcopy": false, 00:30:48.016 "get_zone_info": false, 00:30:48.016 "zone_management": false, 00:30:48.016 "zone_append": false, 00:30:48.016 "compare": false, 00:30:48.016 "compare_and_write": false, 00:30:48.016 "abort": false, 00:30:48.016 "seek_hole": true, 00:30:48.016 "seek_data": true, 00:30:48.016 "copy": false, 
00:30:48.016 "nvme_iov_md": false 00:30:48.016 }, 00:30:48.016 "driver_specific": { 00:30:48.016 "lvol": { 00:30:48.016 "lvol_store_uuid": "5378b969-149c-4c18-9ff6-e49a4efd9e10", 00:30:48.016 "base_bdev": "Nvme0n1", 00:30:48.016 "thin_provision": true, 00:30:48.016 "num_allocated_clusters": 0, 00:30:48.016 "snapshot": false, 00:30:48.016 "clone": false, 00:30:48.016 "esnap_clone": false 00:30:48.016 } 00:30:48.016 } 00:30:48.016 } 00:30:48.016 ] 00:30:48.016 18:45:33 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:48.016 18:45:33 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:48.016 18:45:33 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:48.273 [2024-07-15 18:45:33.620940] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:48.273 COMP_lvs0/lv0 00:30:48.273 18:45:33 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:48.273 18:45:33 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:48.273 18:45:33 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:48.273 18:45:33 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:30:48.273 18:45:33 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:48.273 18:45:33 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:48.274 18:45:33 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:48.531 18:45:33 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:48.789 [ 00:30:48.789 { 00:30:48.789 "name": "COMP_lvs0/lv0", 00:30:48.789 "aliases": [ 00:30:48.789 
"6fd5dc74-1fc9-55a7-8324-4c2a7a602850" 00:30:48.789 ], 00:30:48.789 "product_name": "compress", 00:30:48.790 "block_size": 512, 00:30:48.790 "num_blocks": 200704, 00:30:48.790 "uuid": "6fd5dc74-1fc9-55a7-8324-4c2a7a602850", 00:30:48.790 "assigned_rate_limits": { 00:30:48.790 "rw_ios_per_sec": 0, 00:30:48.790 "rw_mbytes_per_sec": 0, 00:30:48.790 "r_mbytes_per_sec": 0, 00:30:48.790 "w_mbytes_per_sec": 0 00:30:48.790 }, 00:30:48.790 "claimed": false, 00:30:48.790 "zoned": false, 00:30:48.790 "supported_io_types": { 00:30:48.790 "read": true, 00:30:48.790 "write": true, 00:30:48.790 "unmap": false, 00:30:48.790 "flush": false, 00:30:48.790 "reset": false, 00:30:48.790 "nvme_admin": false, 00:30:48.790 "nvme_io": false, 00:30:48.790 "nvme_io_md": false, 00:30:48.790 "write_zeroes": true, 00:30:48.790 "zcopy": false, 00:30:48.790 "get_zone_info": false, 00:30:48.790 "zone_management": false, 00:30:48.790 "zone_append": false, 00:30:48.790 "compare": false, 00:30:48.790 "compare_and_write": false, 00:30:48.790 "abort": false, 00:30:48.790 "seek_hole": false, 00:30:48.790 "seek_data": false, 00:30:48.790 "copy": false, 00:30:48.790 "nvme_iov_md": false 00:30:48.790 }, 00:30:48.790 "driver_specific": { 00:30:48.790 "compress": { 00:30:48.790 "name": "COMP_lvs0/lv0", 00:30:48.790 "base_bdev_name": "68700e22-cbab-47c8-bf0b-66d14888fdc4" 00:30:48.790 } 00:30:48.790 } 00:30:48.790 } 00:30:48.790 ] 00:30:48.790 18:45:34 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:30:48.790 18:45:34 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:48.790 [2024-07-15 18:45:34.233520] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f4a881b1350 PMD being used: compress_qat 00:30:48.790 I/O targets: 00:30:48.790 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:30:48.790 00:30:48.790 00:30:48.790 CUnit - A unit testing framework for C - Version 2.1-3 00:30:48.790 
http://cunit.sourceforge.net/ 00:30:48.790 00:30:48.790 00:30:48.790 Suite: bdevio tests on: COMP_lvs0/lv0 00:30:48.790 Test: blockdev write read block ...passed 00:30:48.790 Test: blockdev write zeroes read block ...passed 00:30:48.790 Test: blockdev write zeroes read no split ...passed 00:30:48.790 Test: blockdev write zeroes read split ...passed 00:30:49.048 Test: blockdev write zeroes read split partial ...passed 00:30:49.048 Test: blockdev reset ...[2024-07-15 18:45:34.349674] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:30:49.048 passed 00:30:49.048 Test: blockdev write read 8 blocks ...passed 00:30:49.048 Test: blockdev write read size > 128k ...passed 00:30:49.048 Test: blockdev write read invalid size ...passed 00:30:49.048 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:49.048 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:49.048 Test: blockdev write read max offset ...passed 00:30:49.048 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:49.048 Test: blockdev writev readv 8 blocks ...passed 00:30:49.048 Test: blockdev writev readv 30 x 1block ...passed 00:30:49.048 Test: blockdev writev readv block ...passed 00:30:49.048 Test: blockdev writev readv size > 128k ...passed 00:30:49.048 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:49.048 Test: blockdev comparev and writev ...passed 00:30:49.048 Test: blockdev nvme passthru rw ...passed 00:30:49.048 Test: blockdev nvme passthru vendor specific ...passed 00:30:49.048 Test: blockdev nvme admin passthru ...passed 00:30:49.048 Test: blockdev copy ...passed 00:30:49.048 00:30:49.048 Run Summary: Type Total Ran Passed Failed Inactive 00:30:49.048 suites 1 1 n/a 0 0 00:30:49.048 tests 23 23 23 0 0 00:30:49.048 asserts 130 130 130 0 n/a 00:30:49.048 00:30:49.048 Elapsed time = 0.316 seconds 00:30:49.048 0 00:30:49.048 18:45:34 compress_compdev -- 
compress/compress.sh@60 -- # destroy_vols 00:30:49.048 18:45:34 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:49.306 18:45:34 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:49.564 18:45:34 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:30:49.564 18:45:34 compress_compdev -- compress/compress.sh@62 -- # killprocess 2969556 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2969556 ']' 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2969556 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2969556 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2969556' 00:30:49.564 killing process with pid 2969556 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@967 -- # kill 2969556 00:30:49.564 18:45:34 compress_compdev -- common/autotest_common.sh@972 -- # wait 2969556 00:30:51.499 18:45:36 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:30:51.499 18:45:36 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:30:51.499 00:30:51.499 real 0m47.525s 00:30:51.499 user 1m48.922s 00:30:51.499 sys 0m4.610s 00:30:51.499 18:45:36 compress_compdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.499 18:45:36 compress_compdev -- 
common/autotest_common.sh@10 -- # set +x 00:30:51.499 ************************************ 00:30:51.499 END TEST compress_compdev 00:30:51.499 ************************************ 00:30:51.499 18:45:36 -- common/autotest_common.sh@1142 -- # return 0 00:30:51.499 18:45:36 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:30:51.499 18:45:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:51.499 18:45:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.499 18:45:36 -- common/autotest_common.sh@10 -- # set +x 00:30:51.499 ************************************ 00:30:51.499 START TEST compress_isal 00:30:51.499 ************************************ 00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:30:51.499 * Looking for test storage... 00:30:51.499 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b98b40-9a1d-eb11-906e-0017a4403562 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=80b98b40-9a1d-eb11-906e-0017a4403562 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:51.499 18:45:36 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.499 18:45:36 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.499 18:45:36 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.499 18:45:36 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.499 18:45:36 compress_isal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.499 18:45:36 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.499 18:45:36 compress_isal -- paths/export.sh@5 -- # export PATH 00:30:51.499 18:45:36 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@47 -- # : 0 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:51.499 18:45:36 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2971005 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2971005 00:30:51.499 18:45:36 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2971005 ']' 00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.499 18:45:36 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:51.499 [2024-07-15 18:45:36.779171] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:51.499 [2024-07-15 18:45:36.779238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971005 ] 00:30:51.499 [2024-07-15 18:45:36.884099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:51.499 [2024-07-15 18:45:36.996783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.499 [2024-07-15 18:45:36.996789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.470 18:45:37 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.470 18:45:37 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:30:52.470 18:45:37 compress_isal -- compress/compress.sh@74 -- # create_vols 00:30:52.470 18:45:37 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:52.470 18:45:37 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:55.755 18:45:41 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:55.755 18:45:41 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:55.755 18:45:41 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:55.755 18:45:41 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:55.755 18:45:41 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:55.755 18:45:41 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:55.755 
18:45:41 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:56.320 18:45:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:56.320 [ 00:30:56.320 { 00:30:56.320 "name": "Nvme0n1", 00:30:56.320 "aliases": [ 00:30:56.321 "9fc0b7f7-2a94-4b7d-8d84-6f03589abe34" 00:30:56.321 ], 00:30:56.321 "product_name": "NVMe disk", 00:30:56.321 "block_size": 512, 00:30:56.321 "num_blocks": 1953525168, 00:30:56.321 "uuid": "9fc0b7f7-2a94-4b7d-8d84-6f03589abe34", 00:30:56.321 "assigned_rate_limits": { 00:30:56.321 "rw_ios_per_sec": 0, 00:30:56.321 "rw_mbytes_per_sec": 0, 00:30:56.321 "r_mbytes_per_sec": 0, 00:30:56.321 "w_mbytes_per_sec": 0 00:30:56.321 }, 00:30:56.321 "claimed": false, 00:30:56.321 "zoned": false, 00:30:56.321 "supported_io_types": { 00:30:56.321 "read": true, 00:30:56.321 "write": true, 00:30:56.321 "unmap": true, 00:30:56.321 "flush": true, 00:30:56.321 "reset": true, 00:30:56.321 "nvme_admin": true, 00:30:56.321 "nvme_io": true, 00:30:56.321 "nvme_io_md": false, 00:30:56.321 "write_zeroes": true, 00:30:56.321 "zcopy": false, 00:30:56.321 "get_zone_info": false, 00:30:56.321 "zone_management": false, 00:30:56.321 "zone_append": false, 00:30:56.321 "compare": false, 00:30:56.321 "compare_and_write": false, 00:30:56.321 "abort": true, 00:30:56.321 "seek_hole": false, 00:30:56.321 "seek_data": false, 00:30:56.321 "copy": false, 00:30:56.321 "nvme_iov_md": false 00:30:56.321 }, 00:30:56.321 "driver_specific": { 00:30:56.321 "nvme": [ 00:30:56.321 { 00:30:56.321 "pci_address": "0000:5e:00.0", 00:30:56.321 "trid": { 00:30:56.321 "trtype": "PCIe", 00:30:56.321 "traddr": "0000:5e:00.0" 00:30:56.321 }, 00:30:56.321 "ctrlr_data": { 00:30:56.321 "cntlid": 0, 00:30:56.321 "vendor_id": "0x8086", 00:30:56.321 "model_number": "INTEL SSDPE2KX010T8", 00:30:56.321 "serial_number": 
"BTLJ913602XE1P0FGN", 00:30:56.321 "firmware_revision": "VDV10184", 00:30:56.321 "oacs": { 00:30:56.321 "security": 0, 00:30:56.321 "format": 1, 00:30:56.321 "firmware": 1, 00:30:56.321 "ns_manage": 1 00:30:56.321 }, 00:30:56.321 "multi_ctrlr": false, 00:30:56.321 "ana_reporting": false 00:30:56.321 }, 00:30:56.321 "vs": { 00:30:56.321 "nvme_version": "1.2" 00:30:56.321 }, 00:30:56.321 "ns_data": { 00:30:56.321 "id": 1, 00:30:56.321 "can_share": false 00:30:56.321 } 00:30:56.321 } 00:30:56.321 ], 00:30:56.321 "mp_policy": "active_passive" 00:30:56.321 } 00:30:56.321 } 00:30:56.321 ] 00:30:56.321 18:45:41 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:56.321 18:45:41 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:57.696 55bb6240-7fb4-47c0-bde5-991191ddf68e 00:30:57.696 18:45:42 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:57.696 93ccc6db-0488-48aa-9e13-57cd971f60fd 00:30:57.696 18:45:43 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:57.696 18:45:43 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:30:57.696 18:45:43 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:57.696 18:45:43 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:57.696 18:45:43 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:57.696 18:45:43 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:57.696 18:45:43 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:57.954 18:45:43 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 
2000 00:30:58.211 [ 00:30:58.211 { 00:30:58.211 "name": "93ccc6db-0488-48aa-9e13-57cd971f60fd", 00:30:58.211 "aliases": [ 00:30:58.211 "lvs0/lv0" 00:30:58.211 ], 00:30:58.211 "product_name": "Logical Volume", 00:30:58.211 "block_size": 512, 00:30:58.211 "num_blocks": 204800, 00:30:58.211 "uuid": "93ccc6db-0488-48aa-9e13-57cd971f60fd", 00:30:58.211 "assigned_rate_limits": { 00:30:58.211 "rw_ios_per_sec": 0, 00:30:58.211 "rw_mbytes_per_sec": 0, 00:30:58.211 "r_mbytes_per_sec": 0, 00:30:58.211 "w_mbytes_per_sec": 0 00:30:58.211 }, 00:30:58.211 "claimed": false, 00:30:58.211 "zoned": false, 00:30:58.211 "supported_io_types": { 00:30:58.211 "read": true, 00:30:58.211 "write": true, 00:30:58.211 "unmap": true, 00:30:58.211 "flush": false, 00:30:58.211 "reset": true, 00:30:58.211 "nvme_admin": false, 00:30:58.211 "nvme_io": false, 00:30:58.211 "nvme_io_md": false, 00:30:58.211 "write_zeroes": true, 00:30:58.211 "zcopy": false, 00:30:58.211 "get_zone_info": false, 00:30:58.211 "zone_management": false, 00:30:58.211 "zone_append": false, 00:30:58.211 "compare": false, 00:30:58.211 "compare_and_write": false, 00:30:58.211 "abort": false, 00:30:58.211 "seek_hole": true, 00:30:58.211 "seek_data": true, 00:30:58.211 "copy": false, 00:30:58.211 "nvme_iov_md": false 00:30:58.211 }, 00:30:58.211 "driver_specific": { 00:30:58.211 "lvol": { 00:30:58.211 "lvol_store_uuid": "55bb6240-7fb4-47c0-bde5-991191ddf68e", 00:30:58.211 "base_bdev": "Nvme0n1", 00:30:58.211 "thin_provision": true, 00:30:58.211 "num_allocated_clusters": 0, 00:30:58.211 "snapshot": false, 00:30:58.211 "clone": false, 00:30:58.211 "esnap_clone": false 00:30:58.211 } 00:30:58.211 } 00:30:58.211 } 00:30:58.211 ] 00:30:58.211 18:45:43 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:58.211 18:45:43 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:58.211 18:45:43 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:58.776 [2024-07-15 18:45:44.185450] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:58.776 COMP_lvs0/lv0 00:30:58.776 18:45:44 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:58.776 18:45:44 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:58.776 18:45:44 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:58.776 18:45:44 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:58.776 18:45:44 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:58.776 18:45:44 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:58.776 18:45:44 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:59.033 18:45:44 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:59.291 [ 00:30:59.291 { 00:30:59.291 "name": "COMP_lvs0/lv0", 00:30:59.291 "aliases": [ 00:30:59.291 "1a4119f6-3074-5a73-b41d-c33a3bb5aa13" 00:30:59.291 ], 00:30:59.291 "product_name": "compress", 00:30:59.291 "block_size": 512, 00:30:59.291 "num_blocks": 200704, 00:30:59.291 "uuid": "1a4119f6-3074-5a73-b41d-c33a3bb5aa13", 00:30:59.291 "assigned_rate_limits": { 00:30:59.291 "rw_ios_per_sec": 0, 00:30:59.291 "rw_mbytes_per_sec": 0, 00:30:59.291 "r_mbytes_per_sec": 0, 00:30:59.291 "w_mbytes_per_sec": 0 00:30:59.291 }, 00:30:59.291 "claimed": false, 00:30:59.291 "zoned": false, 00:30:59.291 "supported_io_types": { 00:30:59.291 "read": true, 00:30:59.291 "write": true, 00:30:59.291 "unmap": false, 00:30:59.291 "flush": false, 00:30:59.291 "reset": false, 00:30:59.291 "nvme_admin": false, 00:30:59.291 "nvme_io": false, 00:30:59.291 "nvme_io_md": false, 00:30:59.291 
"write_zeroes": true, 00:30:59.291 "zcopy": false, 00:30:59.291 "get_zone_info": false, 00:30:59.291 "zone_management": false, 00:30:59.291 "zone_append": false, 00:30:59.291 "compare": false, 00:30:59.291 "compare_and_write": false, 00:30:59.291 "abort": false, 00:30:59.291 "seek_hole": false, 00:30:59.291 "seek_data": false, 00:30:59.291 "copy": false, 00:30:59.291 "nvme_iov_md": false 00:30:59.291 }, 00:30:59.291 "driver_specific": { 00:30:59.291 "compress": { 00:30:59.291 "name": "COMP_lvs0/lv0", 00:30:59.291 "base_bdev_name": "93ccc6db-0488-48aa-9e13-57cd971f60fd" 00:30:59.291 } 00:30:59.291 } 00:30:59.291 } 00:30:59.291 ] 00:30:59.291 18:45:44 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:59.291 18:45:44 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:59.575 Running I/O for 3 seconds... 00:31:02.853 00:31:02.853 Latency(us) 00:31:02.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.853 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:02.853 Verification LBA range: start 0x0 length 0x3100 00:31:02.853 COMP_lvs0/lv0 : 3.01 1221.03 4.77 0.00 0.00 26084.34 175.54 27088.21 00:31:02.853 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:02.853 Verification LBA range: start 0x3100 length 0x3100 00:31:02.853 COMP_lvs0/lv0 : 3.01 1233.85 4.82 0.00 0.00 25771.42 259.41 26089.57 00:31:02.853 =================================================================================================================== 00:31:02.853 Total : 2454.88 9.59 0.00 0.00 25927.04 175.54 27088.21 00:31:02.853 0 00:31:02.854 18:45:48 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:31:02.854 18:45:48 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:03.112 
18:45:48 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:03.370 18:45:48 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:03.370 18:45:48 compress_isal -- compress/compress.sh@78 -- # killprocess 2971005 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2971005 ']' 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2971005 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@953 -- # uname 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2971005 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2971005' 00:31:03.370 killing process with pid 2971005 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@967 -- # kill 2971005 00:31:03.370 Received shutdown signal, test time was about 3.000000 seconds 00:31:03.370 00:31:03.370 Latency(us) 00:31:03.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.370 =================================================================================================================== 00:31:03.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:03.370 18:45:48 compress_isal -- common/autotest_common.sh@972 -- # wait 2971005 00:31:05.272 18:45:50 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:31:05.272 18:45:50 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:05.272 18:45:50 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2973068 
00:31:05.272 18:45:50 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:05.272 18:45:50 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:31:05.272 18:45:50 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2973068 00:31:05.272 18:45:50 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2973068 ']' 00:31:05.272 18:45:50 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.272 18:45:50 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.272 18:45:50 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.272 18:45:50 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.272 18:45:50 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:05.272 [2024-07-15 18:45:50.377807] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:31:05.272 [2024-07-15 18:45:50.377869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973068 ]
00:31:05.272 [2024-07-15 18:45:50.483191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:05.272 [2024-07-15 18:45:50.595172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:05.272 [2024-07-15 18:45:50.595179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:05.839 18:45:51 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:05.839 18:45:51 compress_isal -- common/autotest_common.sh@862 -- # return 0
00:31:05.839 18:45:51 compress_isal -- compress/compress.sh@74 -- # create_vols 512
00:31:05.839 18:45:51 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:31:05.839 18:45:51 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:31:09.123 18:45:54 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1
00:31:09.123 18:45:54 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1
00:31:09.123 18:45:54 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:09.123 18:45:54 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:09.123 18:45:54 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:09.123 18:45:54 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:09.123 18:45:54 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:09.381 18:45:54 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000
00:31:09.640 [
00:31:09.640 {
00:31:09.640 "name": "Nvme0n1",
00:31:09.640 "aliases": [
00:31:09.640 "568e9493-77c7-476f-adc1-c99f8f7fc1ff"
00:31:09.640 ],
00:31:09.640 "product_name": "NVMe disk",
00:31:09.640 "block_size": 512,
00:31:09.640 "num_blocks": 1953525168,
00:31:09.640 "uuid": "568e9493-77c7-476f-adc1-c99f8f7fc1ff",
00:31:09.640 "assigned_rate_limits": {
00:31:09.640 "rw_ios_per_sec": 0,
00:31:09.640 "rw_mbytes_per_sec": 0,
00:31:09.640 "r_mbytes_per_sec": 0,
00:31:09.640 "w_mbytes_per_sec": 0
00:31:09.640 },
00:31:09.640 "claimed": false,
00:31:09.640 "zoned": false,
00:31:09.640 "supported_io_types": {
00:31:09.640 "read": true,
00:31:09.640 "write": true,
00:31:09.640 "unmap": true,
00:31:09.640 "flush": true,
00:31:09.640 "reset": true,
00:31:09.640 "nvme_admin": true,
00:31:09.640 "nvme_io": true,
00:31:09.640 "nvme_io_md": false,
00:31:09.640 "write_zeroes": true,
00:31:09.640 "zcopy": false,
00:31:09.640 "get_zone_info": false,
00:31:09.640 "zone_management": false,
00:31:09.640 "zone_append": false,
00:31:09.640 "compare": false,
00:31:09.640 "compare_and_write": false,
00:31:09.640 "abort": true,
00:31:09.640 "seek_hole": false,
00:31:09.640 "seek_data": false,
00:31:09.640 "copy": false,
00:31:09.640 "nvme_iov_md": false
00:31:09.640 },
00:31:09.640 "driver_specific": {
00:31:09.640 "nvme": [
00:31:09.640 {
00:31:09.640 "pci_address": "0000:5e:00.0",
00:31:09.640 "trid": {
00:31:09.640 "trtype": "PCIe",
00:31:09.640 "traddr": "0000:5e:00.0"
00:31:09.640 },
00:31:09.640 "ctrlr_data": {
00:31:09.640 "cntlid": 0,
00:31:09.640 "vendor_id": "0x8086",
00:31:09.640 "model_number": "INTEL SSDPE2KX010T8",
00:31:09.640 "serial_number": "BTLJ913602XE1P0FGN",
00:31:09.640 "firmware_revision": "VDV10184",
00:31:09.640 "oacs": {
00:31:09.640 "security": 0,
00:31:09.640 "format": 1,
00:31:09.640 "firmware": 1,
00:31:09.640 "ns_manage": 1
00:31:09.640 },
00:31:09.640 "multi_ctrlr": false,
00:31:09.640 "ana_reporting": false
00:31:09.640 },
00:31:09.640 "vs": {
00:31:09.640 "nvme_version": "1.2"
00:31:09.640 },
00:31:09.640 "ns_data": {
00:31:09.640 "id": 1,
00:31:09.640 "can_share": false
00:31:09.640 }
00:31:09.640 }
00:31:09.640 ],
00:31:09.640 "mp_policy": "active_passive"
00:31:09.640 }
00:31:09.640 }
00:31:09.640 ]
00:31:09.640 18:45:55 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:09.640 18:45:55 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
00:31:11.020 050f9ef2-28d9-4e89-846b-a62d41924276
00:31:11.020 18:45:56 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
00:31:11.020 a9c0c0f3-1505-43a9-a558-1427dc6ede9a
00:31:11.020 18:45:56 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0
00:31:11.020 18:45:56 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0
00:31:11.020 18:45:56 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:11.020 18:45:56 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:11.020 18:45:56 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:11.020 18:45:56 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:11.020 18:45:56 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:11.278 18:45:56 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000
00:31:11.536 [
00:31:11.536 {
00:31:11.536 "name": "a9c0c0f3-1505-43a9-a558-1427dc6ede9a",
00:31:11.536 "aliases": [
00:31:11.536 "lvs0/lv0"
00:31:11.536 ],
00:31:11.536 "product_name": "Logical Volume",
00:31:11.536 "block_size": 512,
00:31:11.536 "num_blocks": 204800,
00:31:11.536 "uuid": "a9c0c0f3-1505-43a9-a558-1427dc6ede9a",
00:31:11.537 "assigned_rate_limits": {
00:31:11.537 "rw_ios_per_sec": 0,
00:31:11.537 "rw_mbytes_per_sec": 0,
00:31:11.537 "r_mbytes_per_sec": 0,
00:31:11.537 "w_mbytes_per_sec": 0
00:31:11.537 },
00:31:11.537 "claimed": false,
00:31:11.537 "zoned": false,
00:31:11.537 "supported_io_types": {
00:31:11.537 "read": true,
00:31:11.537 "write": true,
00:31:11.537 "unmap": true,
00:31:11.537 "flush": false,
00:31:11.537 "reset": true,
00:31:11.537 "nvme_admin": false,
00:31:11.537 "nvme_io": false,
00:31:11.537 "nvme_io_md": false,
00:31:11.537 "write_zeroes": true,
00:31:11.537 "zcopy": false,
00:31:11.537 "get_zone_info": false,
00:31:11.537 "zone_management": false,
00:31:11.537 "zone_append": false,
00:31:11.537 "compare": false,
00:31:11.537 "compare_and_write": false,
00:31:11.537 "abort": false,
00:31:11.537 "seek_hole": true,
00:31:11.537 "seek_data": true,
00:31:11.537 "copy": false,
00:31:11.537 "nvme_iov_md": false
00:31:11.537 },
00:31:11.537 "driver_specific": {
00:31:11.537 "lvol": {
00:31:11.537 "lvol_store_uuid": "050f9ef2-28d9-4e89-846b-a62d41924276",
00:31:11.537 "base_bdev": "Nvme0n1",
00:31:11.537 "thin_provision": true,
00:31:11.537 "num_allocated_clusters": 0,
00:31:11.537 "snapshot": false,
00:31:11.537 "clone": false,
00:31:11.537 "esnap_clone": false
00:31:11.537 }
00:31:11.537 }
00:31:11.537 }
00:31:11.537 ]
00:31:11.795 18:45:57 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:11.795 18:45:57 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']'
00:31:11.795 18:45:57 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512
00:31:11.795 [2024-07-15 18:45:57.338059] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0
00:31:11.795 COMP_lvs0/lv0
00:31:12.053 18:45:57 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0
00:31:12.053 18:45:57 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0
00:31:12.053 18:45:57 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:12.053 18:45:57 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:12.053 18:45:57 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:12.053 18:45:57 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:12.053 18:45:57 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:12.311 18:45:57 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
00:31:12.569 [
00:31:12.569 {
00:31:12.569 "name": "COMP_lvs0/lv0",
00:31:12.569 "aliases": [
00:31:12.569 "8532d09e-989c-57dc-ba32-6b6786f1b052"
00:31:12.569 ],
00:31:12.569 "product_name": "compress",
00:31:12.569 "block_size": 512,
00:31:12.569 "num_blocks": 200704,
00:31:12.569 "uuid": "8532d09e-989c-57dc-ba32-6b6786f1b052",
00:31:12.569 "assigned_rate_limits": {
00:31:12.569 "rw_ios_per_sec": 0,
00:31:12.569 "rw_mbytes_per_sec": 0,
00:31:12.569 "r_mbytes_per_sec": 0,
00:31:12.569 "w_mbytes_per_sec": 0
00:31:12.569 },
00:31:12.569 "claimed": false,
00:31:12.569 "zoned": false,
00:31:12.569 "supported_io_types": {
00:31:12.569 "read": true,
00:31:12.569 "write": true,
00:31:12.569 "unmap": false,
00:31:12.569 "flush": false,
00:31:12.569 "reset": false,
00:31:12.569 "nvme_admin": false,
00:31:12.569 "nvme_io": false,
00:31:12.569 "nvme_io_md": false,
00:31:12.569 "write_zeroes": true,
00:31:12.569 "zcopy": false,
00:31:12.569 "get_zone_info": false,
00:31:12.569 "zone_management": false,
00:31:12.569 "zone_append": false,
00:31:12.569 "compare": false,
00:31:12.569 "compare_and_write": false,
00:31:12.569 "abort": false,
00:31:12.569 "seek_hole": false,
00:31:12.569 "seek_data": false,
00:31:12.569 "copy": false,
00:31:12.569 "nvme_iov_md": false
00:31:12.569 },
00:31:12.569 "driver_specific": {
00:31:12.569 "compress": {
00:31:12.569 "name": "COMP_lvs0/lv0",
00:31:12.569 "base_bdev_name": "a9c0c0f3-1505-43a9-a558-1427dc6ede9a"
00:31:12.569 }
00:31:12.569 }
00:31:12.569 }
00:31:12.569 ]
00:31:12.569 18:45:57 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:12.569 18:45:57 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:31:12.569 Running I/O for 3 seconds...
00:31:15.882
00:31:15.882 Latency(us)
00:31:15.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:15.882 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:31:15.882 Verification LBA range: start 0x0 length 0x3100
00:31:15.882 COMP_lvs0/lv0 : 3.01 1793.98 7.01 0.00 0.00 17708.45 116.05 21720.50
00:31:15.882 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:31:15.882 Verification LBA range: start 0x3100 length 0x3100
00:31:15.882 COMP_lvs0/lv0 : 3.01 1784.22 6.97 0.00 0.00 17785.84 120.44 20721.86
00:31:15.882 ===================================================================================================================
00:31:15.882 Total : 3578.20 13.98 0.00 0.00 17747.02 116.05 21720.50
00:31:15.882 0
00:31:15.882 18:46:01 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:31:15.882 18:46:01 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:31:16.140 18:46:01 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:31:16.140 18:46:01 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:31:16.140 18:46:01 compress_isal -- compress/compress.sh@78 -- # killprocess 2973068
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2973068 ']'
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2973068
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@953 -- # uname
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2973068
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2973068'
00:31:16.140 killing process with pid 2973068
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@967 -- # kill 2973068
00:31:16.140 Received shutdown signal, test time was about 3.000000 seconds
00:31:16.140
00:31:16.140 Latency(us)
00:31:16.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:16.140 ===================================================================================================================
00:31:16.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:16.140 18:46:01 compress_isal -- common/autotest_common.sh@972 -- # wait 2973068
00:31:18.042 18:46:03 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096
00:31:18.042 18:46:03 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]]
00:31:18.042 18:46:03 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2975006
00:31:18.042 18:46:03 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6
00:31:18.042 18:46:03 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:31:18.042 18:46:03 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2975006
00:31:18.042 18:46:03 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2975006 ']'
00:31:18.042 18:46:03 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:18.042 18:46:03 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:18.042 18:46:03 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:18.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:18.042 18:46:03 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:18.042 18:46:03 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:31:18.042 [2024-07-15 18:46:03.261510] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:31:18.042 [2024-07-15 18:46:03.261590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975006 ]
00:31:18.042 [2024-07-15 18:46:03.375006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:18.042 [2024-07-15 18:46:03.482963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:18.042 [2024-07-15 18:46:03.482990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:18.975 18:46:04 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:18.975 18:46:04 compress_isal -- common/autotest_common.sh@862 -- # return 0
00:31:18.975 18:46:04 compress_isal -- compress/compress.sh@74 -- # create_vols 4096
00:31:18.975 18:46:04 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:31:18.975 18:46:04 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:31:22.253 18:46:07 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:22.253 18:46:07 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000
00:31:22.511 [
00:31:22.511 {
00:31:22.511 "name": "Nvme0n1",
00:31:22.511 "aliases": [
00:31:22.511 "95616ac5-8224-411c-b0b2-54be34261846"
00:31:22.511 ],
00:31:22.511 "product_name": "NVMe disk",
00:31:22.511 "block_size": 512,
00:31:22.511 "num_blocks": 1953525168,
00:31:22.511 "uuid": "95616ac5-8224-411c-b0b2-54be34261846",
00:31:22.511 "assigned_rate_limits": {
00:31:22.511 "rw_ios_per_sec": 0,
00:31:22.511 "rw_mbytes_per_sec": 0,
00:31:22.511 "r_mbytes_per_sec": 0,
00:31:22.511 "w_mbytes_per_sec": 0
00:31:22.511 },
00:31:22.511 "claimed": false,
00:31:22.511 "zoned": false,
00:31:22.511 "supported_io_types": {
00:31:22.511 "read": true,
00:31:22.511 "write": true,
00:31:22.511 "unmap": true,
00:31:22.511 "flush": true,
00:31:22.511 "reset": true,
00:31:22.511 "nvme_admin": true,
00:31:22.511 "nvme_io": true,
00:31:22.511 "nvme_io_md": false,
00:31:22.511 "write_zeroes": true,
00:31:22.511 "zcopy": false,
00:31:22.511 "get_zone_info": false,
00:31:22.511 "zone_management": false,
00:31:22.511 "zone_append": false,
00:31:22.511 "compare": false,
00:31:22.511 "compare_and_write": false,
00:31:22.511 "abort": true,
00:31:22.511 "seek_hole": false,
00:31:22.511 "seek_data": false,
00:31:22.511 "copy": false,
00:31:22.511 "nvme_iov_md": false
00:31:22.511 },
00:31:22.511 "driver_specific": {
00:31:22.511 "nvme": [
00:31:22.511 {
00:31:22.511 "pci_address": "0000:5e:00.0",
00:31:22.511 "trid": {
00:31:22.511 "trtype": "PCIe",
00:31:22.511 "traddr": "0000:5e:00.0"
00:31:22.511 },
00:31:22.511 "ctrlr_data": {
00:31:22.511 "cntlid": 0,
00:31:22.511 "vendor_id": "0x8086",
00:31:22.511 "model_number": "INTEL SSDPE2KX010T8",
00:31:22.511 "serial_number": "BTLJ913602XE1P0FGN",
00:31:22.511 "firmware_revision": "VDV10184",
00:31:22.511 "oacs": {
00:31:22.511 "security": 0,
00:31:22.511 "format": 1,
00:31:22.511 "firmware": 1,
00:31:22.511 "ns_manage": 1
00:31:22.511 },
00:31:22.511 "multi_ctrlr": false,
00:31:22.512 "ana_reporting": false
00:31:22.512 },
00:31:22.512 "vs": {
00:31:22.512 "nvme_version": "1.2"
00:31:22.512 },
00:31:22.512 "ns_data": {
00:31:22.512 "id": 1,
00:31:22.512 "can_share": false
00:31:22.512 }
00:31:22.512 }
00:31:22.512 ],
00:31:22.512 "mp_policy": "active_passive"
00:31:22.512 }
00:31:22.512 }
00:31:22.512 ]
00:31:22.512 18:46:07 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:22.512 18:46:07 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
00:31:23.887 f9581d2d-e8c4-4d03-9c54-e11a555b668b
00:31:23.887 18:46:09 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
00:31:23.887 74011eae-b7f5-4da4-a694-43460e626d8d
00:31:23.887 18:46:09 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0
00:31:23.887 18:46:09 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0
00:31:23.887 18:46:09 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:23.887 18:46:09 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:23.887 18:46:09 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:23.887 18:46:09 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:23.887 18:46:09 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:24.149 18:46:09 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000
00:31:24.405 [
00:31:24.405 {
00:31:24.405 "name": "74011eae-b7f5-4da4-a694-43460e626d8d",
00:31:24.405 "aliases": [
00:31:24.405 "lvs0/lv0"
00:31:24.405 ],
00:31:24.405 "product_name": "Logical Volume",
00:31:24.405 "block_size": 512,
00:31:24.405 "num_blocks": 204800,
00:31:24.405 "uuid": "74011eae-b7f5-4da4-a694-43460e626d8d",
00:31:24.405 "assigned_rate_limits": {
00:31:24.405 "rw_ios_per_sec": 0,
00:31:24.405 "rw_mbytes_per_sec": 0,
00:31:24.405 "r_mbytes_per_sec": 0,
00:31:24.405 "w_mbytes_per_sec": 0
00:31:24.405 },
00:31:24.405 "claimed": false,
00:31:24.405 "zoned": false,
00:31:24.405 "supported_io_types": {
00:31:24.405 "read": true,
00:31:24.405 "write": true,
00:31:24.405 "unmap": true,
00:31:24.405 "flush": false,
00:31:24.405 "reset": true,
00:31:24.405 "nvme_admin": false,
00:31:24.405 "nvme_io": false,
00:31:24.405 "nvme_io_md": false,
00:31:24.405 "write_zeroes": true,
00:31:24.405 "zcopy": false,
00:31:24.405 "get_zone_info": false,
00:31:24.405 "zone_management": false,
00:31:24.405 "zone_append": false,
00:31:24.405 "compare": false,
00:31:24.405 "compare_and_write": false,
00:31:24.405 "abort": false,
00:31:24.405 "seek_hole": true,
00:31:24.405 "seek_data": true,
00:31:24.405 "copy": false,
00:31:24.405 "nvme_iov_md": false
00:31:24.405 },
00:31:24.405 "driver_specific": {
00:31:24.405 "lvol": {
00:31:24.405 "lvol_store_uuid": "f9581d2d-e8c4-4d03-9c54-e11a555b668b",
00:31:24.405 "base_bdev": "Nvme0n1",
00:31:24.405 "thin_provision": true,
00:31:24.405 "num_allocated_clusters": 0,
00:31:24.405 "snapshot": false,
00:31:24.405 "clone": false,
00:31:24.405 "esnap_clone": false
00:31:24.405 }
00:31:24.405 }
00:31:24.405 }
00:31:24.405 ]
00:31:24.405 18:46:09 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:24.405 18:46:09 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']'
00:31:24.405 18:46:09 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096
00:31:24.661 [2024-07-15 18:46:10.107514] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0
00:31:24.661 COMP_lvs0/lv0
00:31:24.661 18:46:10 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0
00:31:24.661 18:46:10 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0
00:31:24.661 18:46:10 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:24.661 18:46:10 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:24.661 18:46:10 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:24.661 18:46:10 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:24.662 18:46:10 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:24.918 18:46:10 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
00:31:25.175 [
00:31:25.175 {
00:31:25.175 "name": "COMP_lvs0/lv0",
00:31:25.175 "aliases": [
00:31:25.175 "f72ba1fc-da93-56ab-9138-9470be25b410"
00:31:25.175 ],
00:31:25.175 "product_name": "compress",
00:31:25.175 "block_size": 4096,
00:31:25.175 "num_blocks": 25088,
00:31:25.175 "uuid": "f72ba1fc-da93-56ab-9138-9470be25b410",
00:31:25.175 "assigned_rate_limits": {
00:31:25.175 "rw_ios_per_sec": 0,
00:31:25.175 "rw_mbytes_per_sec": 0,
00:31:25.175 "r_mbytes_per_sec": 0,
00:31:25.175 "w_mbytes_per_sec": 0
00:31:25.175 },
00:31:25.175 "claimed": false,
00:31:25.175 "zoned": false,
00:31:25.175 "supported_io_types": {
00:31:25.175 "read": true,
00:31:25.175 "write": true,
00:31:25.175 "unmap": false,
00:31:25.175 "flush": false,
00:31:25.175 "reset": false,
00:31:25.175 "nvme_admin": false,
00:31:25.175 "nvme_io": false,
00:31:25.175 "nvme_io_md": false,
00:31:25.175 "write_zeroes": true,
00:31:25.175 "zcopy": false,
00:31:25.175 "get_zone_info": false,
00:31:25.175 "zone_management": false,
00:31:25.175 "zone_append": false,
00:31:25.175 "compare": false,
00:31:25.175 "compare_and_write": false,
00:31:25.175 "abort": false,
00:31:25.175 "seek_hole": false,
00:31:25.175 "seek_data": false,
00:31:25.175 "copy": false,
00:31:25.175 "nvme_iov_md": false
00:31:25.175 },
00:31:25.175 "driver_specific": {
00:31:25.175 "compress": {
00:31:25.175 "name": "COMP_lvs0/lv0",
00:31:25.175 "base_bdev_name": "74011eae-b7f5-4da4-a694-43460e626d8d"
00:31:25.175 }
00:31:25.175 }
00:31:25.175 }
00:31:25.175 ]
00:31:25.175 18:46:10 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:25.175 18:46:10 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:31:25.432 Running I/O for 3 seconds...
00:31:28.710
00:31:28.711 Latency(us)
00:31:28.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:28.711 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:31:28.711 Verification LBA range: start 0x0 length 0x3100
00:31:28.711 COMP_lvs0/lv0 : 3.01 1791.00 7.00 0.00 0.00 17733.43 118.49 20347.37
00:31:28.711 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:31:28.711 Verification LBA range: start 0x3100 length 0x3100
00:31:28.711 COMP_lvs0/lv0 : 3.02 1802.79 7.04 0.00 0.00 17581.22 118.00 20971.52
00:31:28.711 ===================================================================================================================
00:31:28.711 Total : 3593.80 14.04 0.00 0.00 17657.06 118.00 20971.52
00:31:28.711 0
00:31:28.711 18:46:13 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:31:28.711 18:46:13 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:31:28.711 18:46:14 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:31:28.967 18:46:14 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:31:28.967 18:46:14 compress_isal -- compress/compress.sh@78 -- # killprocess 2975006
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2975006 ']'
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2975006
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@953 -- # uname
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2975006
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2975006'
00:31:28.967 killing process with pid 2975006
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@967 -- # kill 2975006
00:31:28.967 Received shutdown signal, test time was about 3.000000 seconds
00:31:28.967
00:31:28.967 Latency(us)
00:31:28.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:28.967 ===================================================================================================================
00:31:28.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:28.967 18:46:14 compress_isal -- common/autotest_common.sh@972 -- # wait 2975006
00:31:30.864 18:46:15 compress_isal -- compress/compress.sh@89 -- # run_bdevio
00:31:30.864 18:46:15 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]]
00:31:30.864 18:46:15 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2976935
00:31:30.864 18:46:15 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:31:30.864 18:46:15 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w
00:31:30.864 18:46:15 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2976935
00:31:30.864 18:46:15 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2976935 ']'
00:31:30.864 18:46:15 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:30.864 18:46:15 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:30.864 18:46:15 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:30.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:30.864 18:46:15 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:30.864 18:46:15 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:31:30.864 [2024-07-15 18:46:15.974749] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:31:30.864 [2024-07-15 18:46:15.974809] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976935 ]
00:31:30.864 [2024-07-15 18:46:16.075895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:30.864 [2024-07-15 18:46:16.173130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:30.864 [2024-07-15 18:46:16.173235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:30.864 [2024-07-15 18:46:16.173235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:31.429 18:46:16 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:31.429 18:46:16 compress_isal -- common/autotest_common.sh@862 -- # return 0
00:31:31.429 18:46:16 compress_isal -- compress/compress.sh@58 -- # create_vols
00:31:31.429 18:46:16 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:31:31.429 18:46:16 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:31:34.710 18:46:20 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1
00:31:34.710 18:46:20 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1
00:31:34.710 18:46:20 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:34.710 18:46:20 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:34.710 18:46:20 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:34.710 18:46:20 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:34.710 18:46:20 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:34.968 18:46:20 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000
00:31:35.226 [
00:31:35.226 {
00:31:35.226 "name": "Nvme0n1",
00:31:35.226 "aliases": [
00:31:35.226 "d3cb66ba-60d8-4e4a-9a59-544dc5cd8860"
00:31:35.226 ],
00:31:35.226 "product_name": "NVMe disk",
00:31:35.226 "block_size": 512,
00:31:35.226 "num_blocks": 1953525168,
00:31:35.226 "uuid": "d3cb66ba-60d8-4e4a-9a59-544dc5cd8860",
00:31:35.226 "assigned_rate_limits": {
00:31:35.226 "rw_ios_per_sec": 0,
00:31:35.226 "rw_mbytes_per_sec": 0,
00:31:35.226 "r_mbytes_per_sec": 0,
00:31:35.226 "w_mbytes_per_sec": 0
00:31:35.226 },
00:31:35.226 "claimed": false,
00:31:35.226 "zoned": false,
00:31:35.226 "supported_io_types": {
00:31:35.226 "read": true,
00:31:35.226 "write": true,
00:31:35.226 "unmap": true,
00:31:35.226 "flush": true,
00:31:35.226 "reset": true,
00:31:35.226 "nvme_admin": true,
00:31:35.226 "nvme_io": true,
00:31:35.226 "nvme_io_md": false,
00:31:35.226 "write_zeroes": true,
00:31:35.226 "zcopy": false,
00:31:35.226 "get_zone_info": false,
00:31:35.226 "zone_management": false,
00:31:35.226 "zone_append": false,
00:31:35.226 "compare": false,
00:31:35.226 "compare_and_write": false,
00:31:35.226 "abort": true,
00:31:35.226 "seek_hole": false,
00:31:35.226 "seek_data": false,
00:31:35.226 "copy": false,
00:31:35.226 "nvme_iov_md": false
00:31:35.226 },
00:31:35.226 "driver_specific": {
00:31:35.226 "nvme": [
00:31:35.226 {
00:31:35.226 "pci_address": "0000:5e:00.0",
00:31:35.226 "trid": {
00:31:35.226 "trtype": "PCIe",
00:31:35.226 "traddr": "0000:5e:00.0"
00:31:35.226 },
00:31:35.226 "ctrlr_data": {
00:31:35.226 "cntlid": 0,
00:31:35.226 "vendor_id": "0x8086",
00:31:35.226 "model_number": "INTEL SSDPE2KX010T8",
00:31:35.226 "serial_number": "BTLJ913602XE1P0FGN",
00:31:35.226 "firmware_revision": "VDV10184",
00:31:35.226 "oacs": {
00:31:35.226 "security": 0,
00:31:35.226 "format": 1,
00:31:35.226 "firmware": 1,
00:31:35.226 "ns_manage": 1
00:31:35.226 },
00:31:35.226 "multi_ctrlr": false,
00:31:35.226 "ana_reporting": false
00:31:35.226 },
00:31:35.226 "vs": {
00:31:35.226 "nvme_version": "1.2"
00:31:35.226 },
00:31:35.226 "ns_data": {
00:31:35.226 "id": 1,
00:31:35.226 "can_share": false
00:31:35.226 }
00:31:35.226 }
00:31:35.226 ],
00:31:35.226 "mp_policy": "active_passive"
00:31:35.226 }
00:31:35.226 }
00:31:35.226 ]
00:31:35.226 18:46:20 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:35.226 18:46:20 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
00:31:36.600 802a7e36-5cba-47cc-acd2-a86c2378c529
00:31:36.600 18:46:21 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
00:31:36.600 89fc2b59-e3c2-4fc0-b261-3c48afaefd34
00:31:36.600 18:46:22 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0
00:31:36.600 18:46:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0
00:31:36.600 18:46:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:36.600 18:46:22 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:36.600 18:46:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:36.600 18:46:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:36.600 18:46:22 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:36.858 18:46:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000
00:31:37.116 [
00:31:37.116 {
00:31:37.116 "name": "89fc2b59-e3c2-4fc0-b261-3c48afaefd34",
00:31:37.116 "aliases": [
00:31:37.116 "lvs0/lv0"
00:31:37.116 ],
00:31:37.116 "product_name": "Logical Volume",
00:31:37.116 "block_size": 512,
00:31:37.116 "num_blocks": 204800,
00:31:37.116 "uuid": "89fc2b59-e3c2-4fc0-b261-3c48afaefd34",
00:31:37.116 "assigned_rate_limits": {
00:31:37.116 "rw_ios_per_sec": 0,
00:31:37.116 "rw_mbytes_per_sec": 0,
00:31:37.116 "r_mbytes_per_sec": 0,
00:31:37.116 "w_mbytes_per_sec": 0
00:31:37.116 },
00:31:37.116 "claimed": false,
00:31:37.116 "zoned": false,
00:31:37.116 "supported_io_types": {
00:31:37.116 "read": true,
00:31:37.116 "write": true,
00:31:37.116 "unmap": true,
00:31:37.116 "flush": false,
00:31:37.116 "reset": true,
00:31:37.116 "nvme_admin": false,
00:31:37.116 "nvme_io": false,
00:31:37.116 "nvme_io_md": false,
00:31:37.116 "write_zeroes": true,
00:31:37.116 "zcopy": false,
00:31:37.116 "get_zone_info": false,
00:31:37.116 "zone_management": false,
00:31:37.116 "zone_append": false,
00:31:37.116 "compare": false,
00:31:37.116 "compare_and_write": false,
00:31:37.116 "abort": false,
00:31:37.116 "seek_hole": true,
00:31:37.116 "seek_data": true,
00:31:37.116 "copy": false,
00:31:37.116 "nvme_iov_md": false
00:31:37.116 },
00:31:37.116 "driver_specific": {
00:31:37.116 "lvol": {
00:31:37.116 "lvol_store_uuid": "802a7e36-5cba-47cc-acd2-a86c2378c529",
00:31:37.116 "base_bdev": "Nvme0n1",
00:31:37.117 "thin_provision": true,
00:31:37.117 "num_allocated_clusters": 0,
00:31:37.117 "snapshot": false,
00:31:37.117 "clone": false,
00:31:37.117 "esnap_clone": false
00:31:37.117 }
00:31:37.117 }
00:31:37.117 }
00:31:37.117 ]
00:31:37.117 18:46:22 compress_isal -- common/autotest_common.sh@905 -- # return 0
00:31:37.117 18:46:22 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']'
00:31:37.117 18:46:22 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
00:31:37.375 [2024-07-15 18:46:22.884814] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0
00:31:37.375 COMP_lvs0/lv0
00:31:37.375 18:46:22 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0
00:31:37.375 18:46:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0
00:31:37.375 18:46:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:31:37.375 18:46:22 compress_isal -- common/autotest_common.sh@899 -- # local i
00:31:37.375 18:46:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:31:37.375 18:46:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:31:37.375 18:46:22 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:37.939 18:46:23 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
00:31:37.939 [
00:31:37.939 {
00:31:37.939 "name": "COMP_lvs0/lv0",
00:31:37.939 "aliases": [
00:31:37.939 "4c3547f0-dcd4-54f2-9947-8d3fa3c756db"
00:31:37.939 ],
00:31:37.939 "product_name": "compress",
00:31:37.939 "block_size": 512,
00:31:37.939 "num_blocks": 200704,
00:31:37.939 "uuid": "4c3547f0-dcd4-54f2-9947-8d3fa3c756db",
00:31:37.939 "assigned_rate_limits": {
00:31:37.939 "rw_ios_per_sec": 0,
00:31:37.939 "rw_mbytes_per_sec": 0,
00:31:37.939 "r_mbytes_per_sec": 0,
00:31:37.939 "w_mbytes_per_sec": 0
00:31:37.939 },
00:31:37.939 "claimed": false,
00:31:37.939 "zoned": false,
00:31:37.939 "supported_io_types": {
00:31:37.939 "read": true,
00:31:37.939 "write": true,
00:31:37.939 "unmap": false,
00:31:37.939 "flush": false,
00:31:37.939 "reset": false,
00:31:37.939 "nvme_admin": false, 00:31:37.939 "nvme_io": false, 00:31:37.939 "nvme_io_md": false, 00:31:37.939 "write_zeroes": true, 00:31:37.939 "zcopy": false, 00:31:37.939 "get_zone_info": false, 00:31:37.939 "zone_management": false, 00:31:37.939 "zone_append": false, 00:31:37.939 "compare": false, 00:31:37.939 "compare_and_write": false, 00:31:37.939 "abort": false, 00:31:37.939 "seek_hole": false, 00:31:37.939 "seek_data": false, 00:31:37.939 "copy": false, 00:31:37.939 "nvme_iov_md": false 00:31:37.939 }, 00:31:37.939 "driver_specific": { 00:31:37.939 "compress": { 00:31:37.939 "name": "COMP_lvs0/lv0", 00:31:37.939 "base_bdev_name": "89fc2b59-e3c2-4fc0-b261-3c48afaefd34" 00:31:37.939 } 00:31:37.939 } 00:31:37.939 } 00:31:37.939 ] 00:31:37.939 18:46:23 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:37.939 18:46:23 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:38.195 I/O targets: 00:31:38.195 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:31:38.195 00:31:38.195 00:31:38.195 CUnit - A unit testing framework for C - Version 2.1-3 00:31:38.195 http://cunit.sourceforge.net/ 00:31:38.195 00:31:38.195 00:31:38.195 Suite: bdevio tests on: COMP_lvs0/lv0 00:31:38.195 Test: blockdev write read block ...passed 00:31:38.195 Test: blockdev write zeroes read block ...passed 00:31:38.195 Test: blockdev write zeroes read no split ...passed 00:31:38.195 Test: blockdev write zeroes read split ...passed 00:31:38.195 Test: blockdev write zeroes read split partial ...passed 00:31:38.195 Test: blockdev reset ...[2024-07-15 18:46:23.702423] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:31:38.195 passed 00:31:38.195 Test: blockdev write read 8 blocks ...passed 00:31:38.195 Test: blockdev write read size > 128k ...passed 00:31:38.195 Test: blockdev write read invalid size ...passed 00:31:38.195 Test: blockdev write read offset 
+ nbytes == size of blockdev ...passed 00:31:38.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:38.195 Test: blockdev write read max offset ...passed 00:31:38.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:38.195 Test: blockdev writev readv 8 blocks ...passed 00:31:38.195 Test: blockdev writev readv 30 x 1block ...passed 00:31:38.195 Test: blockdev writev readv block ...passed 00:31:38.195 Test: blockdev writev readv size > 128k ...passed 00:31:38.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:38.195 Test: blockdev comparev and writev ...passed 00:31:38.195 Test: blockdev nvme passthru rw ...passed 00:31:38.195 Test: blockdev nvme passthru vendor specific ...passed 00:31:38.195 Test: blockdev nvme admin passthru ...passed 00:31:38.195 Test: blockdev copy ...passed 00:31:38.195 00:31:38.195 Run Summary: Type Total Ran Passed Failed Inactive 00:31:38.195 suites 1 1 n/a 0 0 00:31:38.195 tests 23 23 23 0 0 00:31:38.195 asserts 130 130 130 0 n/a 00:31:38.195 00:31:38.195 Elapsed time = 0.331 seconds 00:31:38.195 0 00:31:38.452 18:46:23 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:31:38.452 18:46:23 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:38.714 18:46:24 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:38.993 18:46:24 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:31:38.993 18:46:24 compress_isal -- compress/compress.sh@62 -- # killprocess 2976935 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2976935 ']' 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2976935 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@953 -- # uname 00:31:38.993 18:46:24 
compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2976935 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2976935' 00:31:38.993 killing process with pid 2976935 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@967 -- # kill 2976935 00:31:38.993 18:46:24 compress_isal -- common/autotest_common.sh@972 -- # wait 2976935 00:31:40.393 18:46:25 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:31:40.393 18:46:25 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:31:40.393 00:31:40.393 real 0m49.260s 00:31:40.393 user 1m55.607s 00:31:40.393 sys 0m3.605s 00:31:40.393 18:46:25 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:40.393 18:46:25 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:40.393 ************************************ 00:31:40.393 END TEST compress_isal 00:31:40.393 ************************************ 00:31:40.393 18:46:25 -- common/autotest_common.sh@1142 -- # return 0 00:31:40.393 18:46:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:40.393 18:46:25 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:31:40.393 18:46:25 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:31:40.393 18:46:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:40.393 18:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.393 18:46:25 -- common/autotest_common.sh@10 -- # set +x 00:31:40.393 ************************************ 00:31:40.393 START TEST blockdev_crypto_aesni 00:31:40.393 
************************************ 00:31:40.393 18:46:25 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:31:40.652 * Looking for test storage... 00:31:40.652 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:31:40.652 18:46:25 blockdev_crypto_aesni -- 
bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2978492 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2978492 00:31:40.652 18:46:25 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 2978492 ']' 00:31:40.652 18:46:25 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:40.652 18:46:25 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.652 18:46:25 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:40.652 18:46:25 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:40.652 18:46:25 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:40.652 18:46:26 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:40.652 [2024-07-15 18:46:26.067334] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:31:40.652 [2024-07-15 18:46:26.067400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978492 ] 00:31:40.652 [2024-07-15 18:46:26.165913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.910 [2024-07-15 18:46:26.263271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.476 18:46:27 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:41.476 18:46:27 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 00:31:41.476 18:46:27 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:31:41.476 18:46:27 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:31:41.476 18:46:27 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:31:41.476 18:46:27 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.476 18:46:27 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:41.476 [2024-07-15 18:46:27.025660] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:41.734 [2024-07-15 18:46:27.033696] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:41.734 [2024-07-15 18:46:27.041707] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:41.734 [2024-07-15 18:46:27.108914] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: 
*NOTICE*: Found crypto devices: 97 00:31:44.310 true 00:31:44.310 true 00:31:44.310 true 00:31:44.310 true 00:31:44.310 Malloc0 00:31:44.310 Malloc1 00:31:44.310 Malloc2 00:31:44.310 Malloc3 00:31:44.310 [2024-07-15 18:46:29.452374] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:44.310 crypto_ram 00:31:44.310 [2024-07-15 18:46:29.460391] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:44.310 crypto_ram2 00:31:44.310 [2024-07-15 18:46:29.468418] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:44.310 crypto_ram3 00:31:44.310 [2024-07-15 18:46:29.476439] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:44.310 crypto_ram4 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.310 18:46:29 
blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f407bafa-cdbe-5391-b454-5bf99a7850ec"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f407bafa-cdbe-5391-b454-5bf99a7850ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c03a1e16-1f0a-5541-be53-32bc33dbcd9f"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c03a1e16-1f0a-5541-be53-32bc33dbcd9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' 
"3cb969a4-c4d6-5f03-ab33-3bf1ad2299d9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3cb969a4-c4d6-5f03-ab33-3bf1ad2299d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ab6a949a-a0dc-570a-9c68-1a28d575d3a4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ab6a949a-a0dc-570a-9c68-1a28d575d3a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": 
false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:31:44.310 18:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 2978492 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 2978492 ']' 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 2978492 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2978492 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2978492' 00:31:44.310 killing process with pid 2978492 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 2978492 00:31:44.310 18:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 2978492 00:31:44.876 18:46:30 
blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:44.876 18:46:30 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:44.876 18:46:30 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:44.876 18:46:30 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.876 18:46:30 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.876 ************************************ 00:31:44.876 START TEST bdev_hello_world 00:31:44.876 ************************************ 00:31:44.876 18:46:30 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:44.876 [2024-07-15 18:46:30.244362] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:31:44.876 [2024-07-15 18:46:30.244423] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2979146 ] 00:31:44.876 [2024-07-15 18:46:30.340283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.133 [2024-07-15 18:46:30.431961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.133 [2024-07-15 18:46:30.453238] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:45.133 [2024-07-15 18:46:30.461265] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:45.133 [2024-07-15 18:46:30.469286] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:45.133 [2024-07-15 18:46:30.578038] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:47.659 [2024-07-15 18:46:32.774013] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:47.659 [2024-07-15 18:46:32.774078] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:47.659 [2024-07-15 18:46:32.774091] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.659 [2024-07-15 18:46:32.782032] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:47.659 [2024-07-15 18:46:32.782052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:47.659 [2024-07-15 18:46:32.782060] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.660 [2024-07-15 18:46:32.790053] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found 
key "test_dek_aesni_cbc_3" 00:31:47.660 [2024-07-15 18:46:32.790070] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:47.660 [2024-07-15 18:46:32.790078] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.660 [2024-07-15 18:46:32.798073] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:47.660 [2024-07-15 18:46:32.798089] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:47.660 [2024-07-15 18:46:32.798098] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.660 [2024-07-15 18:46:32.870472] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:47.660 [2024-07-15 18:46:32.870511] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:31:47.660 [2024-07-15 18:46:32.870527] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:47.660 [2024-07-15 18:46:32.871847] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:47.660 [2024-07-15 18:46:32.871917] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:47.660 [2024-07-15 18:46:32.871932] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:47.660 [2024-07-15 18:46:32.871986] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:31:47.660 00:31:47.660 [2024-07-15 18:46:32.872004] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:47.916 00:31:47.916 real 0m3.022s 00:31:47.916 user 0m2.696s 00:31:47.916 sys 0m0.282s 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:47.916 ************************************ 00:31:47.916 END TEST bdev_hello_world 00:31:47.916 ************************************ 00:31:47.916 18:46:33 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:31:47.916 18:46:33 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:31:47.916 18:46:33 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:47.916 18:46:33 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.916 18:46:33 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:47.916 ************************************ 00:31:47.916 START TEST bdev_bounds 00:31:47.916 ************************************ 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2979587 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2979587' 00:31:47.916 Process bdevio pid: 2979587 00:31:47.916 18:46:33 
blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2979587 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2979587 ']' 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.916 18:46:33 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:47.916 [2024-07-15 18:46:33.311748] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:31:47.916 [2024-07-15 18:46:33.311811] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2979587 ] 00:31:47.916 [2024-07-15 18:46:33.411505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.173 [2024-07-15 18:46:33.507362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.173 [2024-07-15 18:46:33.507465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.173 [2024-07-15 18:46:33.507467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.173 [2024-07-15 18:46:33.528858] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:48.173 [2024-07-15 18:46:33.536888] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:48.173 [2024-07-15 18:46:33.544906] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:48.173 [2024-07-15 18:46:33.649759] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:50.701 [2024-07-15 18:46:35.836373] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:50.701 [2024-07-15 18:46:35.836454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:50.701 [2024-07-15 18:46:35.836467] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:50.701 [2024-07-15 18:46:35.844390] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:50.701 [2024-07-15 18:46:35.844408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:50.701 [2024-07-15 
18:46:35.844418] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:50.701 [2024-07-15 18:46:35.852411] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:50.701 [2024-07-15 18:46:35.852428] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:50.701 [2024-07-15 18:46:35.852436] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:50.701 [2024-07-15 18:46:35.860436] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:50.701 [2024-07-15 18:46:35.860453] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:50.701 [2024-07-15 18:46:35.860461] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:50.701 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:50.701 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:31:50.701 18:46:36 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:50.701 I/O targets: 00:31:50.701 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:31:50.701 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:31:50.701 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:31:50.701 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:31:50.701 00:31:50.701 00:31:50.701 CUnit - A unit testing framework for C - Version 2.1-3 00:31:50.701 http://cunit.sourceforge.net/ 00:31:50.701 00:31:50.701 00:31:50.701 Suite: bdevio tests on: crypto_ram4 00:31:50.701 Test: blockdev write read block ...passed 00:31:50.701 Test: blockdev write zeroes read block ...passed 00:31:50.701 Test: blockdev write zeroes read no split ...passed 00:31:50.701 Test: blockdev 
write zeroes read split ...passed 00:31:50.701 Test: blockdev write zeroes read split partial ...passed 00:31:50.701 Test: blockdev reset ...passed 00:31:50.701 Test: blockdev write read 8 blocks ...passed 00:31:50.701 Test: blockdev write read size > 128k ...passed 00:31:50.701 Test: blockdev write read invalid size ...passed 00:31:50.701 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:50.701 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:50.701 Test: blockdev write read max offset ...passed 00:31:50.701 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:50.701 Test: blockdev writev readv 8 blocks ...passed 00:31:50.701 Test: blockdev writev readv 30 x 1block ...passed 00:31:50.701 Test: blockdev writev readv block ...passed 00:31:50.701 Test: blockdev writev readv size > 128k ...passed 00:31:50.701 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:50.701 Test: blockdev comparev and writev ...passed 00:31:50.701 Test: blockdev nvme passthru rw ...passed 00:31:50.701 Test: blockdev nvme passthru vendor specific ...passed 00:31:50.701 Test: blockdev nvme admin passthru ...passed 00:31:50.701 Test: blockdev copy ...passed 00:31:50.701 Suite: bdevio tests on: crypto_ram3 00:31:50.701 Test: blockdev write read block ...passed 00:31:50.701 Test: blockdev write zeroes read block ...passed 00:31:50.701 Test: blockdev write zeroes read no split ...passed 00:31:50.959 Test: blockdev write zeroes read split ...passed 00:31:50.960 Test: blockdev write zeroes read split partial ...passed 00:31:50.960 Test: blockdev reset ...passed 00:31:50.960 Test: blockdev write read 8 blocks ...passed 00:31:50.960 Test: blockdev write read size > 128k ...passed 00:31:50.960 Test: blockdev write read invalid size ...passed 00:31:50.960 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:50.960 Test: blockdev write read offset + nbytes > size of blockdev 
...passed 00:31:50.960 Test: blockdev write read max offset ...passed 00:31:50.960 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:50.960 Test: blockdev writev readv 8 blocks ...passed 00:31:50.960 Test: blockdev writev readv 30 x 1block ...passed 00:31:50.960 Test: blockdev writev readv block ...passed 00:31:50.960 Test: blockdev writev readv size > 128k ...passed 00:31:50.960 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:50.960 Test: blockdev comparev and writev ...passed 00:31:50.960 Test: blockdev nvme passthru rw ...passed 00:31:50.960 Test: blockdev nvme passthru vendor specific ...passed 00:31:50.960 Test: blockdev nvme admin passthru ...passed 00:31:50.960 Test: blockdev copy ...passed 00:31:50.960 Suite: bdevio tests on: crypto_ram2 00:31:50.960 Test: blockdev write read block ...passed 00:31:50.960 Test: blockdev write zeroes read block ...passed 00:31:50.960 Test: blockdev write zeroes read no split ...passed 00:31:50.960 Test: blockdev write zeroes read split ...passed 00:31:51.218 Test: blockdev write zeroes read split partial ...passed 00:31:51.218 Test: blockdev reset ...passed 00:31:51.218 Test: blockdev write read 8 blocks ...passed 00:31:51.218 Test: blockdev write read size > 128k ...passed 00:31:51.218 Test: blockdev write read invalid size ...passed 00:31:51.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:51.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:51.218 Test: blockdev write read max offset ...passed 00:31:51.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:51.218 Test: blockdev writev readv 8 blocks ...passed 00:31:51.218 Test: blockdev writev readv 30 x 1block ...passed 00:31:51.219 Test: blockdev writev readv block ...passed 00:31:51.219 Test: blockdev writev readv size > 128k ...passed 00:31:51.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:51.219 Test: 
blockdev comparev and writev ...passed 00:31:51.219 Test: blockdev nvme passthru rw ...passed 00:31:51.219 Test: blockdev nvme passthru vendor specific ...passed 00:31:51.219 Test: blockdev nvme admin passthru ...passed 00:31:51.219 Test: blockdev copy ...passed 00:31:51.219 Suite: bdevio tests on: crypto_ram 00:31:51.219 Test: blockdev write read block ...passed 00:31:51.219 Test: blockdev write zeroes read block ...passed 00:31:51.219 Test: blockdev write zeroes read no split ...passed 00:31:51.477 Test: blockdev write zeroes read split ...passed 00:31:51.477 Test: blockdev write zeroes read split partial ...passed 00:31:51.477 Test: blockdev reset ...passed 00:31:51.477 Test: blockdev write read 8 blocks ...passed 00:31:51.477 Test: blockdev write read size > 128k ...passed 00:31:51.477 Test: blockdev write read invalid size ...passed 00:31:51.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:51.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:51.477 Test: blockdev write read max offset ...passed 00:31:51.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:51.477 Test: blockdev writev readv 8 blocks ...passed 00:31:51.477 Test: blockdev writev readv 30 x 1block ...passed 00:31:51.477 Test: blockdev writev readv block ...passed 00:31:51.477 Test: blockdev writev readv size > 128k ...passed 00:31:51.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:51.477 Test: blockdev comparev and writev ...passed 00:31:51.477 Test: blockdev nvme passthru rw ...passed 00:31:51.477 Test: blockdev nvme passthru vendor specific ...passed 00:31:51.477 Test: blockdev nvme admin passthru ...passed 00:31:51.477 Test: blockdev copy ...passed 00:31:51.477 00:31:51.477 Run Summary: Type Total Ran Passed Failed Inactive 00:31:51.477 suites 4 4 n/a 0 0 00:31:51.477 tests 92 92 92 0 0 00:31:51.477 asserts 520 520 520 0 n/a 00:31:51.477 00:31:51.477 Elapsed time = 1.627 
seconds 00:31:51.477 0 00:31:51.477 18:46:36 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2979587 00:31:51.477 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2979587 ']' 00:31:51.477 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2979587 00:31:51.477 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:31:51.477 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:51.477 18:46:36 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2979587 00:31:51.736 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:51.736 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:51.736 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2979587' 00:31:51.736 killing process with pid 2979587 00:31:51.736 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2979587 00:31:51.736 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2979587 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:31:51.994 00:31:51.994 real 0m4.122s 00:31:51.994 user 0m11.269s 00:31:51.994 sys 0m0.476s 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 ************************************ 00:31:51.994 END TEST bdev_bounds 00:31:51.994 ************************************ 00:31:51.994 18:46:37 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:31:51.994 18:46:37 
blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:31:51.994 18:46:37 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:51.994 18:46:37 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.994 18:46:37 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 ************************************ 00:31:51.994 START TEST bdev_nbd 00:31:51.994 ************************************ 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:51.994 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' 
'/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2980266 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2980266 /var/tmp/spdk-nbd.sock 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2980266 ']' 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:31:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:51.995 18:46:37 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:51.995 [2024-07-15 18:46:37.484527] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:31:51.995 [2024-07-15 18:46:37.484586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.254 [2024-07-15 18:46:37.584880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.254 [2024-07-15 18:46:37.679922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.254 [2024-07-15 18:46:37.701257] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:52.254 [2024-07-15 18:46:37.709281] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:52.254 [2024-07-15 18:46:37.717301] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:52.512 [2024-07-15 18:46:37.823686] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:55.043 [2024-07-15 18:46:40.016026] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:55.043 [2024-07-15 18:46:40.016101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:55.043 [2024-07-15 18:46:40.016116] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.043 [2024-07-15 18:46:40.024036] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: 
Found key "test_dek_aesni_cbc_2" 00:31:55.043 [2024-07-15 18:46:40.024056] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:55.043 [2024-07-15 18:46:40.024064] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.043 [2024-07-15 18:46:40.032058] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:55.043 [2024-07-15 18:46:40.032079] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:55.043 [2024-07-15 18:46:40.032091] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.043 [2024-07-15 18:46:40.040076] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:55.043 [2024-07-15 18:46:40.040092] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:55.043 [2024-07-15 18:46:40.040100] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:55.043 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:55.301 18:46:40 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:55.301 1+0 records in 00:31:55.301 1+0 records out 00:31:55.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025808 s, 15.9 MB/s 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:55.301 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:31:55.559 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:55.559 18:46:40 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:55.559 18:46:41 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:55.559 1+0 records in 00:31:55.559 1+0 records out 00:31:55.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273903 s, 15.0 MB/s 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:55.559 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:55.560 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:55.560 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:55.560 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:55.560 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:55.560 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:55.818 1+0 records in 00:31:55.818 1+0 records out 00:31:55.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266877 s, 15.3 MB/s 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:55.818 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:31:56.077 1+0 records in 00:31:56.077 1+0 records out 00:31:56.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294378 s, 13.9 MB/s 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:56.077 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:56.335 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd0", 00:31:56.335 "bdev_name": "crypto_ram" 00:31:56.335 }, 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd1", 00:31:56.335 "bdev_name": "crypto_ram2" 00:31:56.335 }, 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd2", 00:31:56.335 "bdev_name": "crypto_ram3" 00:31:56.335 }, 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd3", 00:31:56.335 "bdev_name": "crypto_ram4" 00:31:56.335 } 00:31:56.335 ]' 00:31:56.335 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:56.335 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:56.335 { 
00:31:56.335 "nbd_device": "/dev/nbd0", 00:31:56.335 "bdev_name": "crypto_ram" 00:31:56.335 }, 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd1", 00:31:56.335 "bdev_name": "crypto_ram2" 00:31:56.335 }, 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd2", 00:31:56.335 "bdev_name": "crypto_ram3" 00:31:56.335 }, 00:31:56.335 { 00:31:56.335 "nbd_device": "/dev/nbd3", 00:31:56.335 "bdev_name": "crypto_ram4" 00:31:56.335 } 00:31:56.335 ]' 00:31:56.336 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:56.594 18:46:41 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:56.852 18:46:42 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:56.852 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 
00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:57.418 18:46:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:31:57.677 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:31:57.677 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:31:57.677 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:31:57.677 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:57.677 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:57.677 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:31:57.934 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:57.934 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:57.934 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:57.934 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.934 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 
-- # local nbd_list 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:58.192 18:46:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:31:58.759 /dev/nbd0 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:58.759 
18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:58.759 1+0 records in 00:31:58.759 1+0 records out 00:31:58.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253577 s, 16.2 MB/s 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:58.759 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:31:59.017 /dev/nbd1 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd1 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.017 1+0 records in 00:31:59.017 1+0 records out 00:31:59.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028844 s, 14.2 MB/s 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:59.017 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:31:59.277 /dev/nbd10 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.277 1+0 records in 00:31:59.277 1+0 records out 00:31:59.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241476 s, 17.0 MB/s 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:59.277 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:31:59.576 /dev/nbd11 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:31:59.576 1+0 records in 00:31:59.576 1+0 records out 00:31:59.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303426 s, 13.5 MB/s 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:59.576 18:46:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:59.835 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:59.835 { 00:31:59.835 "nbd_device": "/dev/nbd0", 00:31:59.835 "bdev_name": "crypto_ram" 00:31:59.835 }, 00:31:59.835 { 00:31:59.835 "nbd_device": "/dev/nbd1", 00:31:59.835 "bdev_name": "crypto_ram2" 00:31:59.835 }, 00:31:59.835 { 00:31:59.835 "nbd_device": "/dev/nbd10", 00:31:59.835 "bdev_name": "crypto_ram3" 00:31:59.835 }, 00:31:59.836 { 00:31:59.836 "nbd_device": "/dev/nbd11", 00:31:59.836 "bdev_name": "crypto_ram4" 00:31:59.836 } 00:31:59.836 ]' 00:31:59.836 18:46:45 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:59.836 { 00:31:59.836 "nbd_device": "/dev/nbd0", 00:31:59.836 "bdev_name": "crypto_ram" 00:31:59.836 }, 00:31:59.836 { 00:31:59.836 "nbd_device": "/dev/nbd1", 00:31:59.836 "bdev_name": "crypto_ram2" 00:31:59.836 }, 00:31:59.836 { 00:31:59.836 "nbd_device": "/dev/nbd10", 00:31:59.836 "bdev_name": "crypto_ram3" 00:31:59.836 }, 00:31:59.836 { 00:31:59.836 "nbd_device": "/dev/nbd11", 00:31:59.836 "bdev_name": "crypto_ram4" 00:31:59.836 } 00:31:59.836 ]' 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:59.836 /dev/nbd1 00:31:59.836 /dev/nbd10 00:31:59.836 /dev/nbd11' 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:59.836 /dev/nbd1 00:31:59.836 /dev/nbd10 00:31:59.836 /dev/nbd11' 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:59.836 18:46:45 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:59.836 256+0 records in 00:31:59.836 256+0 records out 00:31:59.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103682 s, 101 MB/s 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:59.836 256+0 records in 00:31:59.836 256+0 records out 00:31:59.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0438884 s, 23.9 MB/s 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:59.836 256+0 records in 00:31:59.836 256+0 records out 00:31:59.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0486612 s, 21.5 MB/s 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:59.836 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:32:00.095 256+0 records in 00:32:00.095 256+0 records out 00:32:00.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0424192 s, 24.7 MB/s 00:32:00.095 18:46:45 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:00.095 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:32:00.095 256+0 records in 00:32:00.095 256+0 records out 00:32:00.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0395125 s, 26.5 MB/s 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:00.096 18:46:45 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:00.096 18:46:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:00.663 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:00.923 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:32:01.182 18:46:46 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:01.182 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:01.441 18:46:46 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:01.700 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:02.268 malloc_lvol_verify 00:32:02.268 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:02.526 845f1a55-0ab1-43a0-9117-dbbb5f073b96 00:32:02.526 18:46:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:02.785 e50083fa-5ec3-4567-b14f-2b7d7b046cd9 00:32:02.785 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:03.043 /dev/nbd0 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:03.043 mke2fs 1.46.5 (30-Dec-2021) 00:32:03.043 Discarding device blocks: 0/4096 done 00:32:03.043 Creating filesystem with 4096 1k blocks and 1024 inodes 00:32:03.043 00:32:03.043 Allocating group tables: 0/1 done 00:32:03.043 Writing inode tables: 0/1 done 00:32:03.043 Creating journal (1024 blocks): done 00:32:03.043 Writing superblocks and filesystem accounting information: 0/1 done 00:32:03.043 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.043 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2980266 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2980266 ']' 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2980266 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2980266 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2980266' 00:32:03.302 killing process with pid 2980266 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2980266 00:32:03.302 18:46:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2980266 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:32:03.561 00:32:03.561 real 0m11.589s 00:32:03.561 user 0m16.808s 00:32:03.561 sys 0m3.405s 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:03.561 ************************************ 00:32:03.561 END TEST bdev_nbd 00:32:03.561 ************************************ 00:32:03.561 18:46:49 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:32:03.561 18:46:49 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:32:03.561 18:46:49 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:32:03.561 18:46:49 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:32:03.561 18:46:49 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:32:03.561 18:46:49 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:03.561 18:46:49 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.561 18:46:49 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 
00:32:03.561 ************************************ 00:32:03.561 START TEST bdev_fio 00:32:03.561 ************************************ 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:03.561 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:03.561 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 
00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.562 18:46:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:03.821 ************************************ 00:32:03.821 START TEST bdev_fio_rw_verify 00:32:03.821 ************************************ 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 
00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.821 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:03.822 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:03.822 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:03.822 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:03.822 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:03.822 18:46:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:04.080 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:04.080 
job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:04.081 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:04.081 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:04.081 fio-3.35 00:32:04.081 Starting 4 threads 00:32:18.971 00:32:18.972 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2982710: Mon Jul 15 18:47:02 2024 00:32:18.972 read: IOPS=19.5k, BW=76.3MiB/s (80.0MB/s)(763MiB/10001msec) 00:32:18.972 slat (usec): min=18, max=354, avg=69.70, stdev=31.31 00:32:18.972 clat (usec): min=16, max=1772, avg=369.06, stdev=203.39 00:32:18.972 lat (usec): min=69, max=1959, avg=438.77, stdev=217.02 00:32:18.972 clat percentiles (usec): 00:32:18.972 | 50.000th=[ 338], 99.000th=[ 979], 99.900th=[ 1254], 99.990th=[ 1385], 00:32:18.972 | 99.999th=[ 1729] 00:32:18.972 write: IOPS=21.5k, BW=84.0MiB/s (88.1MB/s)(818MiB/9744msec); 0 zone resets 00:32:18.972 slat (usec): min=26, max=1062, avg=82.86, stdev=31.52 00:32:18.972 clat (usec): min=35, max=1855, avg=442.19, stdev=242.71 00:32:18.972 lat (usec): min=80, max=2029, avg=525.05, stdev=256.53 00:32:18.972 clat percentiles (usec): 00:32:18.972 | 50.000th=[ 412], 99.000th=[ 1205], 99.900th=[ 1663], 99.990th=[ 1778], 00:32:18.972 | 99.999th=[ 1844] 00:32:18.972 bw ( KiB/s): min=59656, max=114128, per=97.64%, avg=83959.58, stdev=3355.67, samples=76 00:32:18.972 iops : min=14914, max=28532, avg=20989.89, stdev=838.92, samples=76 00:32:18.972 lat (usec) : 20=0.01%, 50=0.01%, 100=3.76%, 250=24.52%, 500=41.54% 00:32:18.972 lat (usec) : 750=22.77%, 1000=5.45% 00:32:18.972 lat (msec) : 2=1.95% 00:32:18.972 cpu : usr=99.64%, sys=0.01%, ctx=43, majf=0, minf=234 00:32:18.972 IO depths : 1=9.7%, 2=25.7%, 4=51.4%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:18.972 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.972 issued rwts: total=195268,209470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:18.972 00:32:18.972 Run status group 0 (all jobs): 00:32:18.972 READ: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=763MiB (800MB), run=10001-10001msec 00:32:18.972 WRITE: bw=84.0MiB/s (88.1MB/s), 84.0MiB/s-84.0MiB/s (88.1MB/s-88.1MB/s), io=818MiB (858MB), run=9744-9744msec 00:32:18.972 00:32:18.972 real 0m13.396s 00:32:18.972 user 0m48.018s 00:32:18.972 sys 0m0.390s 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:18.972 ************************************ 00:32:18.972 END TEST bdev_fio_rw_verify 00:32:18.972 ************************************ 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:32:18.972 18:47:02 
blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f407bafa-cdbe-5391-b454-5bf99a7850ec"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f407bafa-cdbe-5391-b454-5bf99a7850ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c03a1e16-1f0a-5541-be53-32bc33dbcd9f"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c03a1e16-1f0a-5541-be53-32bc33dbcd9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": 
"crypto_ram3",' ' "aliases": [' ' "3cb969a4-c4d6-5f03-ab33-3bf1ad2299d9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3cb969a4-c4d6-5f03-ab33-3bf1ad2299d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ab6a949a-a0dc-570a-9c68-1a28d575d3a4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ab6a949a-a0dc-570a-9c68-1a28d575d3a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:32:18.972 crypto_ram2 00:32:18.972 crypto_ram3 00:32:18.972 crypto_ram4 ]] 00:32:18.972 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f407bafa-cdbe-5391-b454-5bf99a7850ec"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f407bafa-cdbe-5391-b454-5bf99a7850ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' 
{' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c03a1e16-1f0a-5541-be53-32bc33dbcd9f"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c03a1e16-1f0a-5541-be53-32bc33dbcd9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3cb969a4-c4d6-5f03-ab33-3bf1ad2299d9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3cb969a4-c4d6-5f03-ab33-3bf1ad2299d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' 
' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ab6a949a-a0dc-570a-9c68-1a28d575d3a4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ab6a949a-a0dc-570a-9c68-1a28d575d3a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": 
"crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:18.973 ************************************ 00:32:18.973 START TEST bdev_fio_trim 00:32:18.973 ************************************ 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:18.973 18:47:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:18.973 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.973 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.973 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.973 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:18.973 fio-3.35 00:32:18.973 Starting 4 threads 00:32:31.177 00:32:31.177 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2984927: Mon Jul 15 18:47:15 2024 00:32:31.177 write: IOPS=34.5k, BW=135MiB/s (141MB/s)(1346MiB/10001msec); 0 zone resets 00:32:31.177 slat (usec): min=11, max=1437, avg=66.31, stdev=39.74 00:32:31.177 clat (usec): min=39, max=2084, avg=294.95, stdev=196.80 00:32:31.177 lat (usec): min=57, max=2140, avg=361.26, stdev=222.93 00:32:31.177 clat percentiles (usec): 00:32:31.177 | 50.000th=[ 243], 99.000th=[ 963], 99.900th=[ 1172], 99.990th=[ 1287], 00:32:31.177 | 99.999th=[ 1680] 00:32:31.177 bw ( KiB/s): min=117232, max=199880, per=100.00%, avg=138731.79, stdev=9746.29, samples=76 00:32:31.177 iops : min=29308, max=49970, avg=34682.95, stdev=2436.57, samples=76 00:32:31.177 trim: IOPS=34.5k, BW=135MiB/s (141MB/s)(1346MiB/10001msec); 0 zone resets 00:32:31.177 slat 
(usec): min=4, max=109, avg=18.10, stdev= 7.71 00:32:31.177 clat (usec): min=57, max=1909, avg=278.47, stdev=135.41 00:32:31.177 lat (usec): min=62, max=1944, avg=296.57, stdev=138.77 00:32:31.177 clat percentiles (usec): 00:32:31.177 | 50.000th=[ 253], 99.000th=[ 676], 99.900th=[ 791], 99.990th=[ 881], 00:32:31.177 | 99.999th=[ 1188] 00:32:31.177 bw ( KiB/s): min=117224, max=199896, per=100.00%, avg=138733.05, stdev=9746.51, samples=76 00:32:31.177 iops : min=29306, max=49974, avg=34683.26, stdev=2436.63, samples=76 00:32:31.177 lat (usec) : 50=0.59%, 100=6.53%, 250=43.46%, 500=38.79%, 750=8.52% 00:32:31.177 lat (usec) : 1000=1.76% 00:32:31.177 lat (msec) : 2=0.34%, 4=0.01% 00:32:31.177 cpu : usr=99.61%, sys=0.00%, ctx=107, majf=0, minf=97 00:32:31.177 IO depths : 1=8.0%, 2=26.3%, 4=52.5%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.177 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.177 issued rwts: total=0,344600,344601,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:31.177 00:32:31.177 Run status group 0 (all jobs): 00:32:31.177 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1346MiB (1411MB), run=10001-10001msec 00:32:31.177 TRIM: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1346MiB (1411MB), run=10001-10001msec 00:32:31.177 00:32:31.177 real 0m13.410s 00:32:31.177 user 0m48.358s 00:32:31.177 sys 0m0.414s 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:31.177 ************************************ 00:32:31.177 END TEST bdev_fio_trim 00:32:31.177 ************************************ 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1142 -- # return 0 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:32:31.177 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:32:31.177 00:32:31.177 real 0m27.070s 00:32:31.177 user 1m36.543s 00:32:31.177 sys 0m0.920s 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:31.177 ************************************ 00:32:31.177 END TEST bdev_fio 00:32:31.177 ************************************ 00:32:31.177 18:47:16 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:32:31.177 18:47:16 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:31.177 18:47:16 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:31.177 18:47:16 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:32:31.177 18:47:16 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:31.177 18:47:16 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:31.177 ************************************ 00:32:31.177 START TEST bdev_verify 00:32:31.177 ************************************ 00:32:31.177 18:47:16 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:31.177 [2024-07-15 18:47:16.217689] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:32:31.177 [2024-07-15 18:47:16.217754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986472 ] 00:32:31.177 [2024-07-15 18:47:16.318219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.177 [2024-07-15 18:47:16.410617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.177 [2024-07-15 18:47:16.410623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.177 [2024-07-15 18:47:16.432010] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:31.177 [2024-07-15 18:47:16.440037] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:31.177 [2024-07-15 18:47:16.448059] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:31.178 [2024-07-15 18:47:16.547535] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:33.706 [2024-07-15 18:47:18.735708] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:33.706 [2024-07-15 18:47:18.735776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:33.706 [2024-07-15 18:47:18.735788] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:33.706 [2024-07-15 18:47:18.743725] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:33.706 [2024-07-15 18:47:18.743743] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:33.706 [2024-07-15 18:47:18.743752] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:33.706 [2024-07-15 18:47:18.751747] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:33.706 [2024-07-15 18:47:18.751766] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:33.706 [2024-07-15 18:47:18.751774] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:33.706 [2024-07-15 18:47:18.759772] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:33.706 [2024-07-15 18:47:18.759787] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:33.706 [2024-07-15 18:47:18.759795] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:33.706 Running I/O for 5 seconds... 
00:32:38.994 00:32:38.994 Latency(us) 00:32:38.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.994 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x0 length 0x1000 00:32:38.994 crypto_ram : 5.06 432.85 1.69 0.00 0.00 294472.37 1685.21 178757.24 00:32:38.994 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x1000 length 0x1000 00:32:38.994 crypto_ram : 5.08 352.33 1.38 0.00 0.00 358653.60 5617.37 205720.62 00:32:38.994 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x0 length 0x1000 00:32:38.994 crypto_ram2 : 5.06 435.85 1.70 0.00 0.00 291730.38 842.61 163777.58 00:32:38.994 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x1000 length 0x1000 00:32:38.994 crypto_ram2 : 5.08 344.71 1.35 0.00 0.00 369402.28 6179.11 227690.79 00:32:38.994 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x0 length 0x1000 00:32:38.994 crypto_ram3 : 5.05 3394.43 13.26 0.00 0.00 37354.77 4369.07 29335.16 00:32:38.994 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x1000 length 0x1000 00:32:38.994 crypto_ram3 : 5.06 2633.22 10.29 0.00 0.00 48230.10 7146.54 48933.55 00:32:38.994 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x0 length 0x1000 00:32:38.994 crypto_ram4 : 5.05 3393.65 13.26 0.00 0.00 37265.53 4462.69 28835.84 00:32:38.994 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:38.994 Verification LBA range: start 0x1000 length 0x1000 00:32:38.994 crypto_ram4 : 5.06 2632.43 10.28 0.00 0.00 48082.22 7240.17 
37449.14 00:32:38.994 =================================================================================================================== 00:32:38.994 Total : 13619.48 53.20 0.00 0.00 74638.54 842.61 227690.79 00:32:38.994 00:32:38.994 real 0m8.147s 00:32:38.994 user 0m15.550s 00:32:38.994 sys 0m0.303s 00:32:38.994 18:47:24 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.994 18:47:24 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:38.994 ************************************ 00:32:38.994 END TEST bdev_verify 00:32:38.994 ************************************ 00:32:38.994 18:47:24 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:32:38.994 18:47:24 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:38.994 18:47:24 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:32:38.994 18:47:24 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.994 18:47:24 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:38.994 ************************************ 00:32:38.994 START TEST bdev_verify_big_io 00:32:38.994 ************************************ 00:32:38.994 18:47:24 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:38.994 [2024-07-15 18:47:24.443333] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:32:38.994 [2024-07-15 18:47:24.443441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987717 ] 00:32:39.254 [2024-07-15 18:47:24.580363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:39.254 [2024-07-15 18:47:24.677036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.254 [2024-07-15 18:47:24.677042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.254 [2024-07-15 18:47:24.698437] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:39.254 [2024-07-15 18:47:24.706463] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:39.254 [2024-07-15 18:47:24.714484] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:39.545 [2024-07-15 18:47:24.814377] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:42.075 [2024-07-15 18:47:27.001842] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:42.075 [2024-07-15 18:47:27.001914] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:42.075 [2024-07-15 18:47:27.001926] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:42.075 [2024-07-15 18:47:27.009860] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:42.075 [2024-07-15 18:47:27.009877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:42.075 [2024-07-15 18:47:27.009886] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 
00:32:42.075 [2024-07-15 18:47:27.017884] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:42.075 [2024-07-15 18:47:27.017903] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:42.075 [2024-07-15 18:47:27.017911] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:42.075 [2024-07-15 18:47:27.025920] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:42.075 [2024-07-15 18:47:27.025936] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:42.075 [2024-07-15 18:47:27.025944] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:42.075 Running I/O for 5 seconds... 00:32:42.643 [2024-07-15 18:47:28.118879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:42.643 [2024-07-15 18:47:28.119157] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:42.643 [2024-07-15 18:47:28.119242] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:42.643 [2024-07-15 18:47:28.119304] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:42.643 [2024-07-15 18:47:28.121106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.121176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.121229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.643 [2024-07-15 18:47:28.121281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.121875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.121934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.121993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.122055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.124292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.124358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.124410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.124464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.125012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.125077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.125129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.125181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.643 [2024-07-15 18:47:28.126799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.126863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.126914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.126976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.127514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.127580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.127634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.127687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.129515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.129590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.129643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.129694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.130275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.643 [2024-07-15 18:47:28.130341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.130394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.130447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.132198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.132266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.132318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.132370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.132956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.133016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.133070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.133123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.643 [2024-07-15 18:47:28.135131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.135959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.137549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.137611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.137664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.137716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.138399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.138460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.643 [2024-07-15 18:47:28.138514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.643 [2024-07-15 18:47:28.138566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:42.643 - 00:32:42.905 [identical "*ERROR*: Failed to get src_mbufs!" messages from accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources repeated continuously, timestamps 2024-07-15 18:47:28.140453 through 18:47:28.365174]
00:32:42.906 [2024-07-15 18:47:28.367824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.368389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.368937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.369487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.370581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.371138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.371685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.372239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.374732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.375295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.375850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.376412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.377645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.906 [2024-07-15 18:47:28.378212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.378757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.379309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.381726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.382293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.382837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.383385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.384494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.385058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.385621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.386166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.388836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.389400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.906 [2024-07-15 18:47:28.389957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.390500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.391582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.392149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.392695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.393264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.395778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.396340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.396884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.397436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.398703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.399291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.399840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.906 [2024-07-15 18:47:28.400407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.402904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.403469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.404040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.404589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.405747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.406313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.406863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.407423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.409996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.410554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.411107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.411664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.906 [2024-07-15 18:47:28.412781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.413347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.413918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.414483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.416762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.417323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.417869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.418419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.419684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.420250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.420803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.421353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.425387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.906 [2024-07-15 18:47:28.427502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.428954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.430732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.431858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.433530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.434738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.435954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.440174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.441452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.443447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.445793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.447052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.449174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:42.906 [2024-07-15 18:47:28.451313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:42.906 [2024-07-15 18:47:28.453718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.457890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.458456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.459217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.461339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.464295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.465767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.467869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.469956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.474181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.476316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.478721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.166 [2024-07-15 18:47:28.480440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.483087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.484928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.485477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.486964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.490275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.492373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.494713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.496225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.166 [2024-07-15 18:47:28.498331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.500453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.502546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.504304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.167 [2024-07-15 18:47:28.507356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.507906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.509748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.511832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.514088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.516206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.518333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.520728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.524504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.526926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.528371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.530455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.533448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.167 [2024-07-15 18:47:28.534234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.534778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.536892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.540538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.542690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.545091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.545643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.548441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.550579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.552969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.554361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.556541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.557101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.167 [2024-07-15 18:47:28.559192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.561321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.563177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.565313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.567451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.569534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.573790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.576207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.577700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.579798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.582450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.583011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.584284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.167 [2024-07-15 18:47:28.586401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.590160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.592251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.594065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.594608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.597280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.599424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.601412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.603490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.605538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.607087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.609187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.611272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.167 [2024-07-15 18:47:28.613828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.616011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.618416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.619406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.623291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.624901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.626999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.629123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.630597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.631156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.633233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.635369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.167 [2024-07-15 18:47:28.639107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.167 [2024-07-15 18:47:28.641512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:43.430 [... previous message repeated many times, last occurrence 2024-07-15 18:47:28.735529 ...]
00:32:43.430 [2024-07-15 18:47:28.735580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.736189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.736379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.736448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.736516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.736583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.738763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.738829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.738893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.738966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.739530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.739718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.739778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.430 [2024-07-15 18:47:28.739829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.739881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.742022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.742097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.742638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.743153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.743345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.743403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.743469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.743525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.745495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.746065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.746617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.430 [2024-07-15 18:47:28.747166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.747673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.747861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.748423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.748973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.749517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.752370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.752933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.753490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.754042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.754568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.755245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.755799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.430 [2024-07-15 18:47:28.756364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.756905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.759593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.760167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.760714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.761262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.761786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.762468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.763023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.763571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.764126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.766486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.767047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.430 [2024-07-15 18:47:28.767598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.768146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.768694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.769372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.769937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.770496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.771052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.773345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.773897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.774448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.775001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.775580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.776259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.430 [2024-07-15 18:47:28.776819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.777369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.777914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.780354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.780914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.781467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.782016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.782482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.783159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.783716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.784275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.784814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.787618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.430 [2024-07-15 18:47:28.790042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.790594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.791141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.791526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.430 [2024-07-15 18:47:28.792694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.793926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.794476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.795792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.798689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.800805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.803055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.804524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.804965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.807379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.807937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.808885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.810993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.814760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.816704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.818834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.819387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.820022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.822259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.824407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.826548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.828189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.830385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.831133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.833261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.835387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.835854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.837632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.839736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.841810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.843900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.848126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.850277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.851930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.854053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.854518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.856710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.857268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.858450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.860571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.864348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.866441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.868530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.869098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.869561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.871786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.873912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.876005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.878091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.880124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.881363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.883477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.885627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.886029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.888117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.890215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.892351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.893851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.897643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.899742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.901831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.903959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.904342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.906099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.906654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.908504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.910590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.914163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.916317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.917760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.918316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.918696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.920920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.923024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.924777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.926882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.928999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.930958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.933036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.935182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.935572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.937789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.939933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.942093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.942917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.946717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.948402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.431 [2024-07-15 18:47:28.950527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.952673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.953093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.954139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.954696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.956834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.958985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.962716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.964849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.965552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.966106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.966480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.431 [2024-07-15 18:47:28.968735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.694 [2024-07-15 18:47:29.224711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.224773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.224828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.224891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.227154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.227218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.227270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.227324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.227843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.228040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.228100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.228152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.228217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.694 [2024-07-15 18:47:29.230495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.230572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.230625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.230677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.231168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.231359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.231455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.231510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.231561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.233261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.233325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.233384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.233447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.694 [2024-07-15 18:47:29.233814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.234012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.234070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.234123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.234175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.235924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.236809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.694 [2024-07-15 18:47:29.236860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.238575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.694 [2024-07-15 18:47:29.238638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.238690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.238742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.239283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.239470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.239527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.239579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.695 [2024-07-15 18:47:29.239630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.241541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.241606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.241659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.954 [2024-07-15 18:47:29.241710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.242151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.242342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.242406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.242458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.242522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.244561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.244633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.244691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.244743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.245172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.245363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.245420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.954 [2024-07-15 18:47:29.245473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.245524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.247275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.247340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.247392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.247476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.247844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.248041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.248113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.248168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.248220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.249935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.954 [2024-07-15 18:47:29.250070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.250840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.252634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.252698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.252756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.252817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.253198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.253386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.954 [2024-07-15 18:47:29.253462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.253517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.253577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.255301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.255373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.255427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.255483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.954 [2024-07-15 18:47:29.255849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.256046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.256105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.256157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.256209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.955 [2024-07-15 18:47:29.258066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.258959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.260673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.260743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.260807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.260862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.261238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.955 [2024-07-15 18:47:29.261425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.261482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.261534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.261585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.263325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.263389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.263471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.263532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.263940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.264142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.264206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.264263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.264314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.955 [2024-07-15 18:47:29.266001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.266901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.268628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.268700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.268756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.268808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.955 [2024-07-15 18:47:29.269243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.269434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.269492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.269544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.269596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.271341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.271410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.271462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.273856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.274383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.274572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.274635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.274687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.955 [2024-07-15 18:47:29.274738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.276442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.277015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.278175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.280253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.280692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.280874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.282944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.285030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.287122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.290270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.292393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:43.955 [2024-07-15 18:47:29.294522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:43.955 [2024-07-15 18:47:29.296623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [last message repeated through 2024-07-15 18:47:29.727412]
00:32:44.215 [2024-07-15 18:47:29.727973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.729239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.731329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.735045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.735122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.735175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.735226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.735597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.736569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.736635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.736691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.736748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.738581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.215 [2024-07-15 18:47:29.738645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.738697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.738756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.739131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.739320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.739384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.739436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.739501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.741140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.741211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.741264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.741317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.215 [2024-07-15 18:47:29.741880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.216 [2024-07-15 18:47:29.742085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.742145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.742197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.742249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.743916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.743986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.744039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.744090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.744658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.744846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.744903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.744961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.745013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.216 [2024-07-15 18:47:29.746678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.746742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.746817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.746870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.747394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.747581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.747646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.747700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.747751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.749382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.749445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.749505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.749560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.216 [2024-07-15 18:47:29.749999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.750188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.750245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.750303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.750357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.752096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.752160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.752212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.752266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.752885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.753084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.753145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.753197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.216 [2024-07-15 18:47:29.753249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.754939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.755834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.757635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.757720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.757774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.216 [2024-07-15 18:47:29.757827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.758294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.758481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.758543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.758595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.758660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.760323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.760390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.760441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.760493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.760944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.761141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.761198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.216 [2024-07-15 18:47:29.761250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.761323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.763307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.763371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.763426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.763478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.763846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.764042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.764106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.764159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.216 [2024-07-15 18:47:29.764212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.476 [2024-07-15 18:47:29.766153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.766939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.768839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.768905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.768967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.769020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.769445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.769632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.476 [2024-07-15 18:47:29.769694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.769746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.769799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.771439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.771508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.771560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.771611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.771986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.772177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.772234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.772294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.772349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.476 [2024-07-15 18:47:29.774573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.476 [2024-07-15 18:47:29.774636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.774688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.774739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.775202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.775402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.775463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.775516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.775567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.777392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.777455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.777512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.777564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.777936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.477 [2024-07-15 18:47:29.778129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.778187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.778239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.778291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.780980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.781055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.477 [2024-07-15 18:47:29.782767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.782831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.782883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.782935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.783309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.783497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.783561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.783615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.783669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.785848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.785911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.785970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.477 [2024-07-15 18:47:29.786023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.477 [2024-07-15 18:47:29.786390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.477 [identical *ERROR* messages from accel_dpdk_cryptodev_task_alloc_resources repeated through 2024-07-15 18:47:30.024308; elided] 
00:32:44.739 [2024-07-15 18:47:30.026714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.739 [2024-07-15 18:47:30.027438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.027986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.031452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.033872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.034439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.035013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.035393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.036625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.037791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.038342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.039451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.042843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.043407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.739 [2024-07-15 18:47:30.044928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.046908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.047389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.049507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.739 [2024-07-15 18:47:30.050481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.051030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.052843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.056917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.058854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.059403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.060486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.060923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.063280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.065403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.067085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.069189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.072358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.074484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.076708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.078385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.078761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.081215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.083624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.084422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.084969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.087809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.089917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.092313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.094713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.095307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.095990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.098097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.100495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.102889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.106850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.107414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.108237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.110333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.110708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.113182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.114664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.116769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.119022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.123306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.125713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.127797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.129765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.130165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.132696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.133953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.134499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.136339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.140094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.142515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.144905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.145536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.146075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.148286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.150709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.153118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.154213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.156398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.156961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.159059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.161415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.161790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.163021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.165149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.167514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.169450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.174041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.176289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.177881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.179988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.180361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.182203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.182776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.184178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.186287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.189954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.192371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.193469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.194021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.194402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.196630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.199037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.200280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.202381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.204695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.206785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.209176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.211553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.212120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.213766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.216160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.218542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.219306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.223453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.224601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.226709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.229142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.229519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.230199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.740 [2024-07-15 18:47:30.230750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.740 [2024-07-15 18:47:30.232853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.235263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.239498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.241588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.242143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.243015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.243445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.245971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.248177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.249869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.251975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.255302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.257424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.741 [2024-07-15 18:47:30.259669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.259727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.260170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.262379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.264762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.267158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.267798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.271776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.271846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.271898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.271955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.272520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.274785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.741 [2024-07-15 18:47:30.274851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.274903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.274968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.276988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.277883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.279725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.741 [2024-07-15 18:47:30.279788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.279845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.279905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.280283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.280471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.280529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.280580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.280633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.282555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.282624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.282675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.282727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:44.741 [2024-07-15 18:47:30.283146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:44.741 [2024-07-15 18:47:30.283335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... preceding message repeated many times with varying timestamps, 2024-07-15 18:47:30.283392 through 18:47:30.644451 (log timestamps 00:32:44.741 through 00:32:45.263) ...]
00:32:45.263 [2024-07-15 18:47:30.646193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.647305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.647724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.648043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.648058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.648068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.652647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.654428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.655255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.656685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.659000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.660719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.662449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.663815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.664225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.664240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.664251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.669314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.671276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.673004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.673866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.675666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.677626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.679342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.681074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.681362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.681375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.681386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.684561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.686470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.688473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.690226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.691841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.693239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.695192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.696893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.697184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.697197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.697208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.702273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.703630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.705575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.705626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.707722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.708752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.709980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.710029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.710333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.710346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.710358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.715344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.715404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.715816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.715857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.718288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.718345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.719684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.719731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.720019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.720033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.720043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.725117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.725173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.726923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.726975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.729059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.729115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.729807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.729865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.730405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.730419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.730430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.735460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.735516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.737231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.737276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.738097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.738153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.740086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.740136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.740421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.740438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.740449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.745584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.745638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.746077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.746120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.747848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.747904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.749805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.749854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.263 [2024-07-15 18:47:30.750143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.263 [2024-07-15 18:47:30.750157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.750167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.755259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.755315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.757028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.757072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.758756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.759044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.759570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.759620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.760041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.760083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.264 [2024-07-15 18:47:30.760491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.760775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.760788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.760798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.760809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.765993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.766048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.767478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.767538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.767888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.769880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.769934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.771875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.264 [2024-07-15 18:47:30.771932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.772220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.772233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.772244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.772254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.776592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.776652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.778340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.778385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.778669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.780502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.780556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.782143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.264 [2024-07-15 18:47:30.782188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.782563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.782576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.782586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.782597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.789139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.789197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.789619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.789661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.790157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.790687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.790738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.792378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.264 [2024-07-15 18:47:30.792429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.792777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.792790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.792800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.792810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.798231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.798288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.799652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.799696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.799978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.801937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.801995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.803717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.264 [2024-07-15 18:47:30.803761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.804119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.804133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.804144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.804155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.807186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.807243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.808961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.809005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.809284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.811342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.811411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.811853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.264 [2024-07-15 18:47:30.811898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.812182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.812195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.812206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.264 [2024-07-15 18:47:30.812220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.816986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.817055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.817470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.817511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.817970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.819184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.819238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.527 [2024-07-15 18:47:30.820516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.925170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.925633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.925646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.925656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.925667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.927721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.927779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.928231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.928281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.928730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.928888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.928934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.929356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.929398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.929712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.929726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.929737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.929748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.932099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.932155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.932570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.932611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.932986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.933514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.933575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.934000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.934049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.934551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.934567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.934580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.934594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.937039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.937095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.937509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.937550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.937836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.938371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.938430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.938970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.939019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.939301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.939314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.939325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.939335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.942867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.942929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.944871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.944912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.945391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.947046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.947101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.948430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.948478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.948758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.948770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.948781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.948792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.951719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.951776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.953398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.953447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.953890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.955942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.955997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.957231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.957278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.957562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.957581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.957592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.957603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.964290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.965559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.965607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.965889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.965902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.965912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.967958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.969648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.969697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.970444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.970937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.972673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.972718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.974181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.974467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.974481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.974492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.976549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.978271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.978321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.979029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.530 [2024-07-15 18:47:30.979520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.981190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.981246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.982793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.983103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.983117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.983127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.985416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.530 [2024-07-15 18:47:30.987228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.987278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.987899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.988420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.990028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.531 [2024-07-15 18:47:30.990090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.991717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.992064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.992078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.992089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.994426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.996253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.996303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.998033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.998487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.999039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:30.999089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.000937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:45.531 [2024-07-15 18:47:31.001309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.001322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.001333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.004482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.005919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.005971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:45.531 [2024-07-15 18:47:31.006999] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007072] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007124] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007198] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007273] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007324] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007376] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:45.531 [2024-07-15 18:47:31.007422] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007481] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007532] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007625] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007671] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007717] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007763] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007810] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007856] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007902] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.007964] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:45.531 [2024-07-15 18:47:31.008082] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008133] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008180] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008226] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008272] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008319] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008365] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008411] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008457] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008503] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008558] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008633] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008708] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:45.531 [2024-07-15 18:47:31.008759] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008810] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008856] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008905] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.008955] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009001] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009047] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009140] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009185] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009232] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:45.531 [2024-07-15 18:47:31.009345] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:45.531 [2024-07-15 18:47:31.009425] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:45.792 (identical "Failed to get dst_mbufs!" errors repeated from 18:47:31.009425 through 18:47:31.148398; repeats omitted) 
00:32:45.792 [2024-07-15 18:47:31.153849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.054 (identical "Failed to get src_mbufs!" errors repeated from 18:47:31.153849 through 18:47:31.396075; repeats omitted) 
00:32:46.054 [2024-07-15 18:47:31.399449] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.055 (identical "Failed to get dst_mbufs!" errors repeated from 18:47:31.399449 through 18:47:31.450430; repeats omitted) 
00:32:46.055 [2024-07-15 18:47:31.450455] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.453932] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.454029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.454091] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.455946] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.458041] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.458074] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.458517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.458542] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.458770] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.460008] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.460077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.460138] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.055 [2024-07-15 18:47:31.460725] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.460752] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464171] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464245] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464314] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464376] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464745] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464771] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.464987] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.465061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.465128] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.465191] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.465664] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.055 [2024-07-15 18:47:31.465691] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.467263] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.467340] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.467401] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.467462] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.467880] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.467906] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.468122] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.468189] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.468260] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.468325] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.468789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.468815] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.055 [2024-07-15 18:47:31.470195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.470267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.470328] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.055 [2024-07-15 18:47:31.470389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.470942] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.470974] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.471184] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.471250] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.471315] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.471376] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.471939] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.471971] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.473549] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.056 [2024-07-15 18:47:31.473625] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.473692] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.473777] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.474353] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.474380] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.474606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.474680] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.474759] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.474825] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.475333] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.475360] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.477080] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.477180] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.056 [2024-07-15 18:47:31.477246] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.477340] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.477845] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.477872] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.478091] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.478159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.478248] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.478312] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.478803] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.478830] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.480191] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.480265] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.480349] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.056 [2024-07-15 18:47:31.480438] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.480975] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.481017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.481227] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.481298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.481361] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.481432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.482033] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.482060] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.483882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.483963] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.484028] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.484090] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.056 [2024-07-15 18:47:31.484619] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.484646] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.484857] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.484936] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.485018] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.485114] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.485687] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.485715] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.487159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.487246] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.487310] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.487383] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.487872] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.056 [2024-07-15 18:47:31.487900] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.488122] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.488190] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.488280] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.488344] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.488820] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.488848] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.490327] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.490405] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.490488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.490568] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491098] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491124] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.056 [2024-07-15 18:47:31.491334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491400] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491462] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491533] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491963] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.491989] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.493409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.493480] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.493542] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.493602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.494186] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.494214] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.056 [2024-07-15 18:47:31.494425] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.057 [2024-07-15 18:47:31.494503] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.494593] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.494657] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.495123] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.495159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.496445] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.496521] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.496582] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.496642] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.497123] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.497149] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.497358] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.497424] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.057 [2024-07-15 18:47:31.497484] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.497576] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.498142] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.498170] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.499598] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.499681] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.499745] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.499806] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.500273] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.500302] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.500516] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.500582] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.500645] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.057 [2024-07-15 18:47:31.500705] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.501209] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.501235] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.502735] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.502816] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.502879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.502939] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.503322] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.503347] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.503553] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.503624] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.503686] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:32:46.057 [2024-07-15 18:47:31.503748] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:32:46.057 [2024-07-15 18:47:31.504218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:32:46.319 (previous message repeated many times, timestamps 18:47:31.504218 through 18:47:31.623341)
00:32:46.319 [2024-07-15 18:47:31.620669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:46.320 (previous message repeated many times, timestamps 18:47:31.620669 through 18:47:31.708922)
00:32:46.320 [2024-07-15 18:47:31.710477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.712603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.714692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.716494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.717108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.717126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.722937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.723503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.724075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.726115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.726499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.728406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.730498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.320 [2024-07-15 18:47:31.731055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.731622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.732006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.732025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.737937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.739686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.741684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.742874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.743333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.744028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.744584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.746409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.748494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.320 [2024-07-15 18:47:31.748902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.748920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.754792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.756893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.758981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.760394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.760823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.761762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.762322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.764337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.766428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.766811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.766828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.320 [2024-07-15 18:47:31.772443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.773886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.775605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.777013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.777485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.779679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.780244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.782211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.783836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.784226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.784244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.790183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.791753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.320 [2024-07-15 18:47:31.793875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.794434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.795026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.797243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.799231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.800874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.802546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.803040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.803060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.320 [2024-07-15 18:47:31.805850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.808094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.809358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.809905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.321 [2024-07-15 18:47:31.810334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.812470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.813388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.815399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.816670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.817191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.817210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.819689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.821705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.823444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.823994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.824459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.826589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.321 [2024-07-15 18:47:31.827787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.829856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.831634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.832247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.832265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.837545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.839656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.840208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.840755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.841138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.843134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.845165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.847253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.321 [2024-07-15 18:47:31.847801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.848401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.848419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.853741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.854304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.856396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.858471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.858889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.861037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.861673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.862221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.864321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.321 [2024-07-15 18:47:31.864700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.321 [2024-07-15 18:47:31.864717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.869915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.871909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.872923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.874922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.875434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.876122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.877692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.581 [2024-07-15 18:47:31.879703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.880660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.881083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.881101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.886157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.582 [2024-07-15 18:47:31.888185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.889954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.890498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.890984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.893268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.894328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.896423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.898214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.898798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.898817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.902004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.902566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.903120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.582 [2024-07-15 18:47:31.903661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.904166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.904852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.905414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.905969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.906513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.907069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.907088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.910142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.910698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.911254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.911814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.912432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.582 [2024-07-15 18:47:31.913119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.915215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.916413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.917538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.918042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.918061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.923749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.924314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.925768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.927489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.927945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.929813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.931512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.582 [2024-07-15 18:47:31.932066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.933478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.933956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.933974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.939439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.941172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.943587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.944909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.945344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.947487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.948502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.949055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.582 [2024-07-15 18:47:31.951136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.582 [2024-07-15 18:47:31.951685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:46.846 [... previous message repeated for subsequent tasks, 2024-07-15 18:47:31.951702 through 18:47:32.197741 ...]
00:32:46.846 [2024-07-15 18:47:32.197806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.197872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.197926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.198516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.198577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.198628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.198681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.199062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.202769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.202834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.202887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.202939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.203650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.203709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.203761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.203820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.204332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.207224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.207291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.207343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.207394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.207941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.208011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.208062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.208114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.208573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.210438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.210503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.210568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.210621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.211175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.211240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.211291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.211347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.211765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.213634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.213744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.213802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.213854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.214512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.214575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.214640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.214710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.215090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.218457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.218524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.218576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.218629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.219265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.219326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.219378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.219429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.219933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.223008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.223086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.223138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.223196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.223980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.224053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.224124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.224178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.224605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.227734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.227808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.227860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.227935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.228725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.228786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.228840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.228897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.229354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.231909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.231981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.233272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.233327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.234056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.236148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.238237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.238298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.238714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.243195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.243273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.245351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.245412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.246031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.247753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.248326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.248900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.249288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.253015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.253085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.846 [2024-07-15 18:47:32.253625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.253690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.256277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.258063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.846 [2024-07-15 18:47:32.258127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.260123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.260720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.265975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.266047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.268013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.268092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.268868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.269568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.847 [2024-07-15 18:47:32.269631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.271365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.271828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.275628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.275699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.277419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.277479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.278123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.280143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.282157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.282216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.282748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.288189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.847 [2024-07-15 18:47:32.288271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.290008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.290068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.291649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.291717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.292896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.292962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.293366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.295875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.295946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.296491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.296547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.297284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.847 [2024-07-15 18:47:32.297843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.297903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.298450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.298915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.301449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.301522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.302074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.302132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.302821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.303389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.303459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.304009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.304552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.847 [2024-07-15 18:47:32.306524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.308622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.310710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.312236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.312934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.313760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.314312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.316395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.316792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.320159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.321345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.323092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.324339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.847 [2024-07-15 18:47:32.326853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.327413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.328659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.330390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.330836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.334874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.337002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.339150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.340718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.342861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.343416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.344015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:46.847 [2024-07-15 18:47:32.345737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:46.847 [2024-07-15 18:47:32.346129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical "Failed to get src_mbufs!" error repeated ~140 times between 18:47:32.346129 and 18:47:32.590278; repeats elided]
00:32:47.677
00:32:47.677 Latency(us)
00:32:47.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.677 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x0 length 0x100
00:32:47.677 crypto_ram : 5.94 43.13 2.70 0.00 0.00 2882166.00 79891.50 2732289.46
00:32:47.677 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x100 length 0x100
00:32:47.677 crypto_ram : 6.00 29.98 1.87 0.00 0.00 3866424.27 32955.25 3499247.91
00:32:47.677 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x0 length 0x100
00:32:47.677 crypto_ram2 : 5.94 43.12 2.70 0.00 0.00 2771599.36 79392.18 2732289.46
00:32:47.677 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x100 length 0x100
00:32:47.677 crypto_ram2 : 6.06 35.80 2.24 0.00 0.00 3209359.90 44189.99 3499247.91
00:32:47.677 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x0 length 0x100
00:32:47.677 crypto_ram3 : 5.64 261.72 16.36 0.00 0.00 433490.54 17725.93 619159.16
00:32:47.677 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x100 length 0x100
00:32:47.677 crypto_ram3 : 5.80 198.12 12.38 0.00 0.00 555715.43 35701.52 647121.19
00:32:47.677 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x0 length 0x100
00:32:47.677 crypto_ram4 : 5.75 277.71 17.36 0.00 0.00 394565.02 4306.65 619159.16
00:32:47.677 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:47.677 Verification LBA range: start 0x100 length 0x100
00:32:47.677 crypto_ram4 : 5.94 216.24 13.51 0.00 0.00 494832.29 6772.05 603180.86
00:32:47.677 ===================================================================================================================
00:32:47.677 Total : 1105.83 69.11 0.00 0.00 839183.69 4306.65 3499247.91
00:32:48.243
00:32:48.243 real 0m9.217s
00:32:48.243 user 0m17.502s
00:32:48.243 sys 0m0.408s
00:32:48.243 18:47:33 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:48.243 18:47:33 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:32:48.243 ************************************
00:32:48.243 END TEST bdev_verify_big_io
00:32:48.243 ************************************
00:32:48.243 18:47:33 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:32:48.243 18:47:33 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:48.243 18:47:33 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:32:48.243 18:47:33 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:48.243 18:47:33 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:48.243 ************************************
00:32:48.243 START TEST bdev_write_zeroes
00:32:48.243 ************************************
00:32:48.243 18:47:33 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:48.243 [2024-07-15 18:47:33.702960] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:32:48.243 [2024-07-15 18:47:33.703073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989207 ]
00:32:48.501 [2024-07-15 18:47:33.840152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:48.501 [2024-07-15 18:47:33.937623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:48.501 [2024-07-15 18:47:33.958909] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:32:48.501 [2024-07-15 18:47:33.966937] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:48.501 [2024-07-15 18:47:33.974962] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:48.760 [2024-07-15 18:47:34.078552] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:32:51.292 [2024-07-15 18:47:36.267791] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:32:51.292 [2024-07-15 18:47:36.267844] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:51.292 [2024-07-15 18:47:36.267856] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.292 [2024-07-15 18:47:36.275810] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:32:51.292 [2024-07-15 18:47:36.275828] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:51.292 [2024-07-15 18:47:36.275837] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.292 [2024-07-15 18:47:36.283832] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:32:51.292 [2024-07-15 18:47:36.283847] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:32:51.292 [2024-07-15 18:47:36.283855] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.292 [2024-07-15 18:47:36.291852] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:32:51.292 [2024-07-15 18:47:36.291867] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:32:51.292 [2024-07-15 18:47:36.291875] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.292 Running I/O for 1 seconds...
00:32:51.895
00:32:51.895 Latency(us)
00:32:51.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:51.895 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:51.895 crypto_ram : 1.03 1796.55 7.02 0.00 0.00 70674.82 5960.66 84884.72
00:32:51.895 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:51.895 crypto_ram2 : 1.03 1809.82 7.07 0.00 0.00 69815.12 5929.45 78892.86
00:32:51.895 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:51.895 crypto_ram3 : 1.02 13765.40 53.77 0.00 0.00 9156.49 2699.46 11796.48
00:32:51.895 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:51.895 crypto_ram4 : 1.02 13802.85 53.92 0.00 0.00 9101.50 2699.46 9487.12
00:32:51.895 ===================================================================================================================
00:32:51.895 Total : 31174.61 121.78 0.00 0.00 16232.12 2699.46 84884.72
00:32:52.461
00:32:52.461 real 0m4.153s
00:32:52.461 user 0m3.788s
00:32:52.461 sys 0m0.323s
00:32:52.461 18:47:37 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:52.461 18:47:37 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:32:52.461 ************************************
00:32:52.461 END TEST bdev_write_zeroes
00:32:52.461 ************************************
00:32:52.461 18:47:37 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:32:52.461 18:47:37 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:52.461 18:47:37 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:32:52.461 18:47:37 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:52.461 18:47:37 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:52.461 ************************************
00:32:52.461 START TEST bdev_json_nonenclosed
00:32:52.461 ************************************
00:32:52.461 18:47:37 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:52.461 [2024-07-15 18:47:37.897209] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:32:52.461 [2024-07-15 18:47:37.897322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989848 ]
00:32:52.720 [2024-07-15 18:47:38.034916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:52.720 [2024-07-15 18:47:38.126869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:52.720 [2024-07-15 18:47:38.126934] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:32:52.720 [2024-07-15 18:47:38.126956] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:32:52.720 [2024-07-15 18:47:38.126966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:52.720
00:32:52.720 real 0m0.417s
00:32:52.720 user 0m0.263s
00:32:52.720 sys 0m0.149s
00:32:52.720 18:47:38 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:32:52.720 18:47:38 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:52.720 18:47:38 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:32:52.720 ************************************
00:32:52.720 END TEST bdev_json_nonenclosed
00:32:52.720 ************************************
00:32:52.720 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234
00:32:52.720 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # true
00:32:52.720 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:52.720 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:32:52.720 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:52.720 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:52.720 ************************************
00:32:52.720 START TEST bdev_json_nonarray
00:32:52.720 ************************************
00:32:52.720 18:47:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:52.979 [2024-07-15 18:47:38.313959] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:32:52.979 [2024-07-15 18:47:38.314017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989886 ]
00:32:52.979 [2024-07-15 18:47:38.410500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:52.979 [2024-07-15 18:47:38.500977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:52.979 [2024-07-15 18:47:38.501052] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:32:52.979 [2024-07-15 18:47:38.501069] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:32:52.979 [2024-07-15 18:47:38.501078] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:53.238
00:32:53.238 real 0m0.335s
00:32:53.238 user 0m0.224s
00:32:53.238 sys 0m0.109s
00:32:53.238 18:47:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:32:53.238 18:47:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:53.238 18:47:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:32:53.238 ************************************
00:32:53.238 END TEST bdev_json_nonarray
00:32:53.238 ************************************
00:32:53.238 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # true
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]]
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]]
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]]
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]]
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]]
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]]
00:32:53.238 18:47:38 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]]
00:32:53.238
00:32:53.238 real 1m12.739s
00:32:53.238 user 2m49.152s
00:32:53.238 sys 0m7.284s
00:32:53.238 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:53.238 18:47:38 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:53.238 ************************************
00:32:53.238 END TEST blockdev_crypto_aesni
00:32:53.238 ************************************
00:32:53.238 18:47:38 -- common/autotest_common.sh@1142 -- # return 0
00:32:53.238 18:47:38 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw
00:32:53.238 18:47:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:32:53.238 18:47:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:53.238 18:47:38 -- common/autotest_common.sh@10 -- # set +x
00:32:53.238 ************************************
00:32:53.238 START TEST blockdev_crypto_sw
00:32:53.238 ************************************
00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw
00:32:53.238 * Looking for test storage...
00:32:53.238 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:32:53.238 
18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2989948 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:32:53.238 18:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 2989948 00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 2989948 ']' 00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:53.238 18:47:38 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:53.497 [2024-07-15 18:47:38.848353] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:32:53.497 [2024-07-15 18:47:38.848416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989948 ] 00:32:53.497 [2024-07-15 18:47:38.944372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.497 [2024-07-15 18:47:39.036805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.432 18:47:39 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:54.432 18:47:39 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:32:54.432 18:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:32:54.432 18:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:32:54.432 18:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:32:54.432 18:47:39 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.432 18:47:39 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:54.691 Malloc0 00:32:54.691 Malloc1 00:32:54.691 true 00:32:54.691 true 00:32:54.691 true 00:32:54.691 [2024-07-15 18:47:40.059798] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:54.691 crypto_ram 00:32:54.691 [2024-07-15 18:47:40.067826] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:54.691 crypto_ram2 00:32:54.691 [2024-07-15 18:47:40.075847] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:54.691 crypto_ram3 00:32:54.691 [ 00:32:54.691 { 00:32:54.691 "name": "Malloc1", 00:32:54.691 "aliases": [ 00:32:54.691 "d3f9d383-8cb1-4a1c-ac78-959c6c2baef2" 00:32:54.691 ], 00:32:54.691 "product_name": "Malloc disk", 00:32:54.691 "block_size": 4096, 00:32:54.691 "num_blocks": 4096, 00:32:54.691 "uuid": "d3f9d383-8cb1-4a1c-ac78-959c6c2baef2", 
00:32:54.691 "assigned_rate_limits": { 00:32:54.691 "rw_ios_per_sec": 0, 00:32:54.691 "rw_mbytes_per_sec": 0, 00:32:54.691 "r_mbytes_per_sec": 0, 00:32:54.691 "w_mbytes_per_sec": 0 00:32:54.691 }, 00:32:54.691 "claimed": true, 00:32:54.691 "claim_type": "exclusive_write", 00:32:54.691 "zoned": false, 00:32:54.691 "supported_io_types": { 00:32:54.691 "read": true, 00:32:54.691 "write": true, 00:32:54.691 "unmap": true, 00:32:54.691 "flush": true, 00:32:54.691 "reset": true, 00:32:54.691 "nvme_admin": false, 00:32:54.691 "nvme_io": false, 00:32:54.691 "nvme_io_md": false, 00:32:54.691 "write_zeroes": true, 00:32:54.691 "zcopy": true, 00:32:54.691 "get_zone_info": false, 00:32:54.691 "zone_management": false, 00:32:54.691 "zone_append": false, 00:32:54.691 "compare": false, 00:32:54.691 "compare_and_write": false, 00:32:54.691 "abort": true, 00:32:54.691 "seek_hole": false, 00:32:54.691 "seek_data": false, 00:32:54.691 "copy": true, 00:32:54.691 "nvme_iov_md": false 00:32:54.691 }, 00:32:54.691 "memory_domains": [ 00:32:54.691 { 00:32:54.691 "dma_device_id": "system", 00:32:54.691 "dma_device_type": 1 00:32:54.691 }, 00:32:54.691 { 00:32:54.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.691 "dma_device_type": 2 00:32:54.691 } 00:32:54.691 ], 00:32:54.691 "driver_specific": {} 00:32:54.691 } 00:32:54.691 ] 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:32:54.691 18:47:40 
blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:54.691 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:32:54.691 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "dc23b25c-f815-53b4-8b0b-f262f75bfe65"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' 
"uuid": "dc23b25c-f815-53b4-8b0b-f262f75bfe65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "c5fb77b7-ae20-5023-8580-bff8cac6f7b1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "c5fb77b7-ae20-5023-8580-bff8cac6f7b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:54.950 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:32:54.950 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:32:54.950 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:32:54.950 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 2989948 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 2989948 ']' 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 2989948 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2989948 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2989948' 00:32:54.950 killing process with pid 2989948 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 2989948 00:32:54.950 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 2989948 00:32:55.209 18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:55.209 
18:47:40 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:55.209 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:55.209 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.209 18:47:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:55.209 ************************************ 00:32:55.209 START TEST bdev_hello_world 00:32:55.209 ************************************ 00:32:55.209 18:47:40 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:55.209 [2024-07-15 18:47:40.711192] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:32:55.209 [2024-07-15 18:47:40.711251] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990377 ] 00:32:55.470 [2024-07-15 18:47:40.810493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.470 [2024-07-15 18:47:40.901232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.769 [2024-07-15 18:47:41.069464] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:55.769 [2024-07-15 18:47:41.069523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:55.769 [2024-07-15 18:47:41.069536] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:55.769 [2024-07-15 18:47:41.077482] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:55.769 [2024-07-15 18:47:41.077500] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:55.769 [2024-07-15 18:47:41.077509] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:55.769 [2024-07-15 18:47:41.085503] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:55.769 [2024-07-15 18:47:41.085519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:55.769 [2024-07-15 18:47:41.085527] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:55.769 [2024-07-15 18:47:41.125637] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:55.769 [2024-07-15 18:47:41.125670] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:32:55.769 [2024-07-15 18:47:41.125686] hello_bdev.c: 
244:hello_start: *NOTICE*: Opening io channel 00:32:55.769 [2024-07-15 18:47:41.127406] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:55.769 [2024-07-15 18:47:41.127469] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:55.769 [2024-07-15 18:47:41.127482] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:55.769 [2024-07-15 18:47:41.127514] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:32:55.769 00:32:55.769 [2024-07-15 18:47:41.127528] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:56.028 00:32:56.028 real 0m0.671s 00:32:56.028 user 0m0.463s 00:32:56.028 sys 0m0.194s 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:56.028 ************************************ 00:32:56.028 END TEST bdev_hello_world 00:32:56.028 ************************************ 00:32:56.028 18:47:41 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:32:56.028 18:47:41 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:32:56.028 18:47:41 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:56.028 18:47:41 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.028 18:47:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:56.028 ************************************ 00:32:56.028 START TEST bdev_bounds 00:32:56.028 ************************************ 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2990413 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT 
SIGTERM EXIT 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2990413' 00:32:56.028 Process bdevio pid: 2990413 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2990413 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2990413 ']' 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.028 18:47:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:56.028 [2024-07-15 18:47:41.423107] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:32:56.028 [2024-07-15 18:47:41.423167] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990413 ] 00:32:56.028 [2024-07-15 18:47:41.519998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:56.286 [2024-07-15 18:47:41.617776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.286 [2024-07-15 18:47:41.617879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:56.286 [2024-07-15 18:47:41.617880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.287 [2024-07-15 18:47:41.778498] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:56.287 [2024-07-15 18:47:41.778563] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:56.287 [2024-07-15 18:47:41.778575] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:56.287 [2024-07-15 18:47:41.786519] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:56.287 [2024-07-15 18:47:41.786535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:56.287 [2024-07-15 18:47:41.786543] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:56.287 [2024-07-15 18:47:41.794542] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:56.287 [2024-07-15 18:47:41.794558] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:56.287 [2024-07-15 18:47:41.794566] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:56.852 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 
-- # (( i == 0 )) 00:32:56.852 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:32:56.852 18:47:42 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:57.110 I/O targets: 00:32:57.111 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:32:57.111 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:32:57.111 00:32:57.111 00:32:57.111 CUnit - A unit testing framework for C - Version 2.1-3 00:32:57.111 http://cunit.sourceforge.net/ 00:32:57.111 00:32:57.111 00:32:57.111 Suite: bdevio tests on: crypto_ram3 00:32:57.111 Test: blockdev write read block ...passed 00:32:57.111 Test: blockdev write zeroes read block ...passed 00:32:57.111 Test: blockdev write zeroes read no split ...passed 00:32:57.111 Test: blockdev write zeroes read split ...passed 00:32:57.111 Test: blockdev write zeroes read split partial ...passed 00:32:57.111 Test: blockdev reset ...passed 00:32:57.111 Test: blockdev write read 8 blocks ...passed 00:32:57.111 Test: blockdev write read size > 128k ...passed 00:32:57.111 Test: blockdev write read invalid size ...passed 00:32:57.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:57.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:57.111 Test: blockdev write read max offset ...passed 00:32:57.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:57.111 Test: blockdev writev readv 8 blocks ...passed 00:32:57.111 Test: blockdev writev readv 30 x 1block ...passed 00:32:57.111 Test: blockdev writev readv block ...passed 00:32:57.111 Test: blockdev writev readv size > 128k ...passed 00:32:57.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:57.111 Test: blockdev comparev and writev ...passed 00:32:57.111 Test: blockdev nvme passthru rw ...passed 00:32:57.111 Test: blockdev nvme passthru vendor specific 
...passed 00:32:57.111 Test: blockdev nvme admin passthru ...passed 00:32:57.111 Test: blockdev copy ...passed 00:32:57.111 Suite: bdevio tests on: crypto_ram 00:32:57.111 Test: blockdev write read block ...passed 00:32:57.111 Test: blockdev write zeroes read block ...passed 00:32:57.111 Test: blockdev write zeroes read no split ...passed 00:32:57.111 Test: blockdev write zeroes read split ...passed 00:32:57.111 Test: blockdev write zeroes read split partial ...passed 00:32:57.111 Test: blockdev reset ...passed 00:32:57.111 Test: blockdev write read 8 blocks ...passed 00:32:57.111 Test: blockdev write read size > 128k ...passed 00:32:57.111 Test: blockdev write read invalid size ...passed 00:32:57.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:57.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:57.111 Test: blockdev write read max offset ...passed 00:32:57.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:57.111 Test: blockdev writev readv 8 blocks ...passed 00:32:57.111 Test: blockdev writev readv 30 x 1block ...passed 00:32:57.111 Test: blockdev writev readv block ...passed 00:32:57.111 Test: blockdev writev readv size > 128k ...passed 00:32:57.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:57.111 Test: blockdev comparev and writev ...passed 00:32:57.111 Test: blockdev nvme passthru rw ...passed 00:32:57.111 Test: blockdev nvme passthru vendor specific ...passed 00:32:57.111 Test: blockdev nvme admin passthru ...passed 00:32:57.111 Test: blockdev copy ...passed 00:32:57.111 00:32:57.111 Run Summary: Type Total Ran Passed Failed Inactive 00:32:57.111 suites 2 2 n/a 0 0 00:32:57.111 tests 46 46 46 0 0 00:32:57.111 asserts 260 260 260 0 n/a 00:32:57.111 00:32:57.111 Elapsed time = 0.193 seconds 00:32:57.111 0 00:32:57.111 18:47:42 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2990413 00:32:57.111 18:47:42 
blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2990413 ']' 00:32:57.111 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2990413 00:32:57.111 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:32:57.111 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:57.111 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2990413 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2990413' 00:32:57.370 killing process with pid 2990413 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2990413 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2990413 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:32:57.370 00:32:57.370 real 0m1.505s 00:32:57.370 user 0m4.100s 00:32:57.370 sys 0m0.298s 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:57.370 ************************************ 00:32:57.370 END TEST bdev_bounds 00:32:57.370 ************************************ 00:32:57.370 18:47:42 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:32:57.370 18:47:42 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:32:57.370 18:47:42 
blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:57.370 18:47:42 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.370 18:47:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:57.370 ************************************ 00:32:57.370 START TEST bdev_nbd 00:32:57.370 ************************************ 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:32:57.370 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- 
bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2990654 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2990654 /var/tmp/spdk-nbd.sock 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2990654 ']' 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:57.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.629 18:47:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:57.629 [2024-07-15 18:47:42.978977] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:32:57.629 [2024-07-15 18:47:42.979036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.629 [2024-07-15 18:47:43.075470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.629 [2024-07-15 18:47:43.170034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.887 [2024-07-15 18:47:43.337338] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:57.887 [2024-07-15 18:47:43.337410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:57.887 [2024-07-15 18:47:43.337423] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:57.887 [2024-07-15 18:47:43.345358] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:57.887 [2024-07-15 18:47:43.345376] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:57.887 [2024-07-15 18:47:43.345384] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:57.887 [2024-07-15 18:47:43.353379] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:57.887 [2024-07-15 18:47:43.353396] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:57.887 [2024-07-15 18:47:43.353404] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify 
/var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:58.451 18:47:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:58.710 1+0 records in 00:32:58.710 1+0 records out 00:32:58.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230204 s, 17.8 MB/s 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:58.710 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd1 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:58.969 1+0 records in 00:32:58.969 1+0 records out 00:32:58.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271556 s, 15.1 MB/s 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 
0 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:58.969 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:59.227 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:59.227 { 00:32:59.227 "nbd_device": "/dev/nbd0", 00:32:59.227 "bdev_name": "crypto_ram" 00:32:59.227 }, 00:32:59.227 { 00:32:59.227 "nbd_device": "/dev/nbd1", 00:32:59.227 "bdev_name": "crypto_ram3" 00:32:59.227 } 00:32:59.227 ]' 00:32:59.227 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:59.227 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:59.227 { 00:32:59.227 "nbd_device": "/dev/nbd0", 00:32:59.227 "bdev_name": "crypto_ram" 00:32:59.227 }, 00:32:59.227 { 00:32:59.227 "nbd_device": "/dev/nbd1", 00:32:59.227 "bdev_name": "crypto_ram3" 00:32:59.227 } 00:32:59.227 ]' 00:32:59.227 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:59.485 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:59.485 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:59.485 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:59.485 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:59.485 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:59.485 18:47:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:59.485 18:47:44 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:59.743 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.002 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:00.260 18:47:45 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:00.260 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:33:00.518 /dev/nbd0 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 
00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:00.518 1+0 records in 00:33:00.518 1+0 records out 00:33:00.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024747 s, 16.6 MB/s 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:00.518 18:47:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:33:00.776 /dev/nbd1 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:00.776 1+0 records in 00:33:00.776 1+0 records out 00:33:00.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267588 s, 15.3 MB/s 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:00.776 18:47:46 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.776 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:01.034 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:01.034 { 00:33:01.034 "nbd_device": "/dev/nbd0", 00:33:01.034 "bdev_name": "crypto_ram" 00:33:01.034 }, 00:33:01.034 { 00:33:01.034 "nbd_device": "/dev/nbd1", 00:33:01.034 "bdev_name": "crypto_ram3" 00:33:01.034 } 00:33:01.034 ]' 00:33:01.034 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:01.034 { 00:33:01.034 "nbd_device": "/dev/nbd0", 00:33:01.034 "bdev_name": "crypto_ram" 00:33:01.034 }, 00:33:01.034 { 00:33:01.034 "nbd_device": "/dev/nbd1", 00:33:01.034 "bdev_name": "crypto_ram3" 00:33:01.034 } 00:33:01.034 ]' 00:33:01.034 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:01.292 /dev/nbd1' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:01.292 /dev/nbd1' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:01.292 18:47:46 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:01.292 256+0 records in 00:33:01.292 256+0 records out 00:33:01.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00975267 s, 108 MB/s 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:01.292 256+0 records in 00:33:01.292 256+0 records out 00:33:01.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021383 s, 49.0 MB/s 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:01.292 256+0 records in 00:33:01.292 256+0 records out 00:33:01.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323105 s, 32.5 MB/s 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:01.292 
18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:01.292 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:01.551 18:47:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:01.809 18:47:47 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:01.809 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:02.066 18:47:47 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:02.066 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:02.067 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:02.067 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:02.323 malloc_lvol_verify 00:33:02.323 18:47:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:02.581 1d038f9c-d393-475c-bf52-7f2d2de8fdfa 00:33:02.581 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:02.838 0a892dfa-e744-4091-b7aa-1781ea61aea6 00:33:02.838 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:03.096 /dev/nbd0 00:33:03.096 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:03.096 mke2fs 1.46.5 (30-Dec-2021) 00:33:03.097 Discarding device blocks: 0/4096 done 00:33:03.097 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:03.097 00:33:03.097 Allocating group tables: 0/1 done 00:33:03.097 Writing inode tables: 0/1 done 00:33:03.097 Creating journal (1024 blocks): done 00:33:03.097 Writing superblocks and filesystem accounting information: 0/1 done 00:33:03.097 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:03.097 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2990654 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2990654 ']' 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill 
-0 2990654 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.355 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2990654 00:33:03.612 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:03.612 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:03.612 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2990654' 00:33:03.613 killing process with pid 2990654 00:33:03.613 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2990654 00:33:03.613 18:47:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2990654 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:03.613 00:33:03.613 real 0m6.200s 00:33:03.613 user 0m9.520s 00:33:03.613 sys 0m1.865s 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:03.613 ************************************ 00:33:03.613 END TEST bdev_nbd 00:33:03.613 ************************************ 00:33:03.613 18:47:49 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:33:03.613 18:47:49 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:03.613 18:47:49 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:33:03.613 18:47:49 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:33:03.613 18:47:49 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:33:03.613 18:47:49 blockdev_crypto_sw -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:03.613 18:47:49 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.613 18:47:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:03.613 ************************************ 00:33:03.613 START TEST bdev_fio 00:33:03.613 ************************************ 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:03.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:03.613 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- 
common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 
00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:03.871 ************************************ 00:33:03.871 START TEST bdev_fio_rw_verify 00:33:03.871 ************************************ 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.871 18:47:49 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:03.871 18:47:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:04.129 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:04.129 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:04.129 fio-3.35 00:33:04.129 Starting 2 threads 00:33:16.328 00:33:16.328 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2991959: Mon Jul 15 18:48:00 2024 00:33:16.328 read: IOPS=20.1k, BW=78.6MiB/s (82.4MB/s)(786MiB/10001msec) 00:33:16.328 slat (nsec): min=15603, max=81167, avg=21604.67, stdev=3682.30 00:33:16.328 clat (usec): min=7, max=408, avg=157.93, stdev=62.73 00:33:16.328 lat (usec): min=28, max=434, avg=179.54, stdev=64.18 
00:33:16.328 clat percentiles (usec): 00:33:16.328 | 50.000th=[ 155], 99.000th=[ 302], 99.900th=[ 322], 99.990th=[ 363], 00:33:16.328 | 99.999th=[ 400] 00:33:16.328 write: IOPS=24.2k, BW=94.4MiB/s (99.0MB/s)(894MiB/9475msec); 0 zone resets 00:33:16.328 slat (usec): min=15, max=1515, avg=36.68, stdev= 5.36 00:33:16.328 clat (usec): min=28, max=1821, avg=212.21, stdev=96.68 00:33:16.328 lat (usec): min=58, max=1850, avg=248.89, stdev=98.36 00:33:16.328 clat percentiles (usec): 00:33:16.328 | 50.000th=[ 206], 99.000th=[ 420], 99.900th=[ 441], 99.990th=[ 562], 00:33:16.328 | 99.999th=[ 1762] 00:33:16.328 bw ( KiB/s): min=86512, max=98320, per=94.95%, avg=91777.26, stdev=1751.57, samples=38 00:33:16.328 iops : min=21628, max=24580, avg=22944.32, stdev=437.89, samples=38 00:33:16.328 lat (usec) : 10=0.01%, 20=0.01%, 50=2.00%, 100=14.53%, 250=60.00% 00:33:16.328 lat (usec) : 500=23.46%, 750=0.01%, 1000=0.01% 00:33:16.328 lat (msec) : 2=0.01% 00:33:16.328 cpu : usr=99.64%, sys=0.00%, ctx=37, majf=0, minf=472 00:33:16.328 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.328 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.328 issued rwts: total=201162,228963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:16.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:16.328 00:33:16.328 Run status group 0 (all jobs): 00:33:16.328 READ: bw=78.6MiB/s (82.4MB/s), 78.6MiB/s-78.6MiB/s (82.4MB/s-82.4MB/s), io=786MiB (824MB), run=10001-10001msec 00:33:16.328 WRITE: bw=94.4MiB/s (99.0MB/s), 94.4MiB/s-94.4MiB/s (99.0MB/s-99.0MB/s), io=894MiB (938MB), run=9475-9475msec 00:33:16.328 00:33:16.328 real 0m11.069s 00:33:16.328 user 0m26.286s 00:33:16.328 sys 0m0.288s 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:16.328 18:48:00 
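The READ/WRITE bandwidth figures in the bdev_fio_rw_verify summary above follow directly from the issued I/O counts at the 4 KiB block size and the reported runtimes. A minimal cross-check sketch, with all numbers copied from this log (illustrative only, not part of the test suite):

```python
# Cross-check of the bdev_fio_rw_verify summary above.
# All constants below are copied from the log lines, not computed by SPDK.
BS = 4096                        # --bs=4k from the fio command line

reads, writes = 201162, 228963   # "issued rwts: total=201162,228963,0,0"
read_secs, write_secs = 10.001, 9.475  # "run=10001-10001msec" / "run=9475-9475msec"

# fio reports decimal MB/s in parentheses next to the MiB/s figure.
read_mb_s = reads * BS / read_secs / 1e6
write_mb_s = writes * BS / write_secs / 1e6

print(f"READ  ~{read_mb_s:.1f} MB/s (log says 82.4 MB/s)")
print(f"WRITE ~{write_mb_s:.1f} MB/s (log says 99.0 MB/s)")
```

The parenthesized MB/s values in the fio output are decimal (1e6 bytes), while the MiB/s values are binary (2^20 bytes); both derive from the same issued-I/O totals.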
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:33:16.328 ************************************ 00:33:16.328 END TEST bdev_fio_rw_verify 00:33:16.328 ************************************ 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:16.328 
18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "dc23b25c-f815-53b4-8b0b-f262f75bfe65"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "dc23b25c-f815-53b4-8b0b-f262f75bfe65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "c5fb77b7-ae20-5023-8580-bff8cac6f7b1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": 
"c5fb77b7-ae20-5023-8580-bff8cac6f7b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:33:16.328 crypto_ram3 ]] 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "dc23b25c-f815-53b4-8b0b-f262f75bfe65"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "dc23b25c-f815-53b4-8b0b-f262f75bfe65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "c5fb77b7-ae20-5023-8580-bff8cac6f7b1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "c5fb77b7-ae20-5023-8580-bff8cac6f7b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' 
' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:33:16.328 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:16.329 ************************************ 00:33:16.329 START TEST bdev_fio_trim 00:33:16.329 ************************************ 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep 
libasan 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:16.329 18:48:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:16.329 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:16.329 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:16.329 fio-3.35 00:33:16.329 Starting 2 threads 00:33:26.300 00:33:26.300 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2993845: Mon Jul 15 18:48:11 2024 00:33:26.300 write: IOPS=28.3k, BW=111MiB/s (116MB/s)(1107MiB/10001msec); 0 zone resets 00:33:26.300 slat (usec): min=11, max=1462, avg=30.99, stdev=10.30 00:33:26.300 clat (usec): min=41, max=1783, avg=232.39, stdev=130.39 00:33:26.300 lat (usec): min=56, max=1852, avg=263.38, stdev=137.37 00:33:26.300 clat percentiles (usec): 00:33:26.300 | 50.000th=[ 215], 99.000th=[ 537], 99.900th=[ 578], 99.990th=[ 766], 00:33:26.300 | 99.999th=[ 963] 00:33:26.300 bw ( KiB/s): min=83688, max=154848, per=100.00%, avg=114243.79, stdev=13244.82, samples=38 00:33:26.300 iops : min=20922, max=38712, avg=28560.95, stdev=3311.20, samples=38 00:33:26.300 trim: IOPS=28.3k, BW=111MiB/s (116MB/s)(1107MiB/10001msec); 0 zone resets 00:33:26.300 slat (usec): min=4, max=136, avg=14.20, stdev= 5.05 00:33:26.300 clat (usec): min=27, max=759, avg=155.05, stdev=61.85 00:33:26.300 lat (usec): min=33, max=776, avg=169.25, stdev=63.93 00:33:26.300 clat percentiles (usec): 00:33:26.300 | 50.000th=[ 145], 99.000th=[ 306], 99.900th=[ 379], 99.990th=[ 408], 00:33:26.300 | 99.999th=[ 652] 00:33:26.300 bw ( KiB/s): min=83688, max=154848, per=100.00%, avg=114245.05, stdev=13245.76, samples=38 00:33:26.300 iops : min=20922, max=38712, avg=28561.26, stdev=3311.44, samples=38 00:33:26.300 lat (usec) : 50=1.02%, 100=16.99%, 250=57.32%, 500=22.83%, 750=1.84% 00:33:26.300 lat (usec) : 1000=0.01% 00:33:26.300 lat (msec) : 2=0.01% 00:33:26.300 cpu : usr=99.61%, sys=0.00%, ctx=33, majf=0, minf=381 00:33:26.300 IO depths : 1=7.2%, 2=17.1%, 4=60.6%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:26.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.300 complete : 0=0.0%, 4=86.8%, 8=13.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.300 issued rwts: total=0,283297,283297,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:26.300 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:26.300 00:33:26.300 Run status group 0 (all jobs): 00:33:26.300 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=1107MiB (1160MB), run=10001-10001msec 00:33:26.300 TRIM: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=1107MiB (1160MB), run=10001-10001msec 00:33:26.300 00:33:26.300 real 0m11.064s 00:33:26.300 user 0m26.355s 00:33:26.300 sys 0m0.279s 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:33:26.300 ************************************ 00:33:26.300 END TEST bdev_fio_trim 00:33:26.300 ************************************ 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:33:26.300 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:33:26.300 00:33:26.300 real 0m22.395s 00:33:26.300 user 0m52.801s 00:33:26.300 sys 0m0.687s 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:26.300 ************************************ 00:33:26.300 END TEST bdev_fio 00:33:26.300 ************************************ 00:33:26.300 18:48:11 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:33:26.300 18:48:11 blockdev_crypto_sw -- 
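The bdev_fio_trim run above reports identical WRITE and TRIM bandwidth because the trimwrite workload issues one trim per write over the same range; the throughput follows from the issued counts. A small cross-check, with the numbers taken from this log:

```python
# Cross-check of the bdev_fio_trim summary above.
# Constants copied from the log: "issued rwts: total=0,283297,283297,0",
# 4 KiB blocks, 10001 msec runtime for both the WRITE and TRIM groups.
BS = 4096
writes = trims = 283297
secs = 10.001

mb_s = writes * BS / secs / 1e6  # decimal MB/s, matching fio's parenthesized figure
print(f"WRITE/TRIM ~{mb_s:.0f} MB/s each (log says 116 MB/s)")
```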
bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:26.300 18:48:11 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:26.300 18:48:11 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:33:26.300 18:48:11 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.300 18:48:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:26.300 ************************************ 00:33:26.300 START TEST bdev_verify 00:33:26.300 ************************************ 00:33:26.300 18:48:11 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:26.300 [2024-07-15 18:48:11.650166] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:33:26.300 [2024-07-15 18:48:11.650227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995826 ] 00:33:26.300 [2024-07-15 18:48:11.751967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:26.300 [2024-07-15 18:48:11.846248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.300 [2024-07-15 18:48:11.846252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.559 [2024-07-15 18:48:12.010718] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:26.559 [2024-07-15 18:48:12.010779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:26.559 [2024-07-15 18:48:12.010791] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:26.559 [2024-07-15 18:48:12.018741] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:26.559 [2024-07-15 18:48:12.018758] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:26.559 [2024-07-15 18:48:12.018767] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:26.559 [2024-07-15 18:48:12.026765] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:26.559 [2024-07-15 18:48:12.026780] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:26.559 [2024-07-15 18:48:12.026793] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:26.559 Running I/O for 5 seconds... 
00:33:31.882 00:33:31.882 Latency(us) 00:33:31.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.882 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:31.882 Verification LBA range: start 0x0 length 0x800 00:33:31.882 crypto_ram : 5.02 5606.03 21.90 0.00 0.00 22742.68 2106.51 24966.10 00:33:31.882 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:31.882 Verification LBA range: start 0x800 length 0x800 00:33:31.882 crypto_ram : 5.02 4382.04 17.12 0.00 0.00 29086.88 2434.19 29335.16 00:33:31.882 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:31.882 Verification LBA range: start 0x0 length 0x800 00:33:31.882 crypto_ram3 : 5.03 2801.59 10.94 0.00 0.00 45440.21 9237.46 30333.81 00:33:31.882 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:31.883 Verification LBA range: start 0x800 length 0x800 00:33:31.883 crypto_ram3 : 5.03 2189.50 8.55 0.00 0.00 58073.82 11359.57 36949.82 00:33:31.883 =================================================================================================================== 00:33:31.883 Total : 14979.17 58.51 0.00 0.00 34012.08 2106.51 36949.82 00:33:31.883 00:33:31.883 real 0m5.741s 00:33:31.883 user 0m10.870s 00:33:31.883 sys 0m0.196s 00:33:31.883 18:48:17 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:31.883 18:48:17 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:33:31.883 ************************************ 00:33:31.883 END TEST bdev_verify 00:33:31.883 ************************************ 00:33:31.883 18:48:17 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:33:31.883 18:48:17 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:31.883 18:48:17 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:33:31.883 18:48:17 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.883 18:48:17 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:31.883 ************************************ 00:33:31.883 START TEST bdev_verify_big_io 00:33:31.883 ************************************ 00:33:31.883 18:48:17 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:31.883 [2024-07-15 18:48:17.432739] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:33:31.883 [2024-07-15 18:48:17.432801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996735 ] 00:33:32.142 [2024-07-15 18:48:17.532483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:32.142 [2024-07-15 18:48:17.629879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.142 [2024-07-15 18:48:17.629885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.401 [2024-07-15 18:48:17.792835] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:32.401 [2024-07-15 18:48:17.792894] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:32.401 [2024-07-15 18:48:17.792907] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:32.401 [2024-07-15 18:48:17.800857] 
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:32.401 [2024-07-15 18:48:17.800879] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:32.401 [2024-07-15 18:48:17.800887] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:32.401 [2024-07-15 18:48:17.808879] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:32.401 [2024-07-15 18:48:17.808894] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:32.401 [2024-07-15 18:48:17.808902] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:32.401 Running I/O for 5 seconds... 00:33:38.961 00:33:38.961 Latency(us) 00:33:38.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.961 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:38.961 Verification LBA range: start 0x0 length 0x80 00:33:38.961 crypto_ram : 5.14 398.68 24.92 0.00 0.00 312283.31 6179.11 457378.86 00:33:38.961 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:38.961 Verification LBA range: start 0x80 length 0x80 00:33:38.961 crypto_ram : 5.10 275.97 17.25 0.00 0.00 447211.63 8238.81 583207.98 00:33:38.961 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:38.961 Verification LBA range: start 0x0 length 0x80 00:33:38.961 crypto_ram3 : 5.39 213.82 13.36 0.00 0.00 560366.09 6616.02 471359.88 00:33:38.961 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:38.961 Verification LBA range: start 0x80 length 0x80 00:33:38.961 crypto_ram3 : 5.39 166.09 10.38 0.00 0.00 700972.08 7739.49 607175.44 00:33:38.961 =================================================================================================================== 00:33:38.961 Total : 
1054.55 65.91 0.00 0.00 461999.08 6179.11 607175.44 00:33:38.961 00:33:38.961 real 0m6.116s 00:33:38.961 user 0m11.602s 00:33:38.961 sys 0m0.213s 00:33:38.961 18:48:23 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:38.961 18:48:23 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:33:38.961 ************************************ 00:33:38.961 END TEST bdev_verify_big_io 00:33:38.961 ************************************ 00:33:38.961 18:48:23 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:33:38.961 18:48:23 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:38.961 18:48:23 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:33:38.961 18:48:23 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:38.961 18:48:23 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:38.961 ************************************ 00:33:38.961 START TEST bdev_write_zeroes 00:33:38.961 ************************************ 00:33:38.961 18:48:23 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:38.961 [2024-07-15 18:48:23.589629] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:33:38.961 [2024-07-15 18:48:23.589688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997741 ] 00:33:38.961 [2024-07-15 18:48:23.686526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.961 [2024-07-15 18:48:23.777632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.961 [2024-07-15 18:48:23.942119] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:38.961 [2024-07-15 18:48:23.942188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:38.961 [2024-07-15 18:48:23.942205] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:38.962 [2024-07-15 18:48:23.950138] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:38.962 [2024-07-15 18:48:23.950154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:38.962 [2024-07-15 18:48:23.950163] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:38.962 [2024-07-15 18:48:23.958161] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:38.962 [2024-07-15 18:48:23.958176] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:38.962 [2024-07-15 18:48:23.958184] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:38.962 Running I/O for 1 seconds... 
00:33:39.526 00:33:39.526 Latency(us) 00:33:39.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.526 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:39.526 crypto_ram : 1.01 24061.36 93.99 0.00 0.00 5305.88 1404.34 7271.38 00:33:39.526 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:39.526 crypto_ram3 : 1.01 12004.26 46.89 0.00 0.00 10583.48 6522.39 10922.67 00:33:39.526 =================================================================================================================== 00:33:39.526 Total : 36065.62 140.88 0.00 0.00 7065.08 1404.34 10922.67 00:33:39.784 00:33:39.784 real 0m1.691s 00:33:39.784 user 0m1.483s 00:33:39.784 sys 0m0.191s 00:33:39.784 18:48:25 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:39.784 18:48:25 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:33:39.784 ************************************ 00:33:39.784 END TEST bdev_write_zeroes 00:33:39.784 ************************************ 00:33:39.784 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:33:39.785 18:48:25 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:39.785 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:33:39.785 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:39.785 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:39.785 ************************************ 00:33:39.785 START TEST bdev_json_nonenclosed 00:33:39.785 ************************************ 00:33:39.785 18:48:25 blockdev_crypto_sw.bdev_json_nonenclosed -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:39.785 [2024-07-15 18:48:25.322419] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:33:39.785 [2024-07-15 18:48:25.322477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997976 ] 00:33:40.043 [2024-07-15 18:48:25.418890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.043 [2024-07-15 18:48:25.509615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.043 [2024-07-15 18:48:25.509678] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:33:40.043 [2024-07-15 18:48:25.509695] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:40.043 [2024-07-15 18:48:25.509704] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:40.302 00:33:40.302 real 0m0.338s 00:33:40.302 user 0m0.219s 00:33:40.302 sys 0m0.117s 00:33:40.302 18:48:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:33:40.302 18:48:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:40.302 18:48:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:33:40.302 ************************************ 00:33:40.302 END TEST bdev_json_nonenclosed 00:33:40.302 ************************************ 00:33:40.302 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:33:40.302 18:48:25 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # true 00:33:40.302 18:48:25 
blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:40.302 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:33:40.302 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.302 18:48:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:40.302 ************************************ 00:33:40.302 START TEST bdev_json_nonarray 00:33:40.302 ************************************ 00:33:40.302 18:48:25 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:40.302 [2024-07-15 18:48:25.699422] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:33:40.302 [2024-07-15 18:48:25.699479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997995 ] 00:33:40.302 [2024-07-15 18:48:25.795381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.561 [2024-07-15 18:48:25.887167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.561 [2024-07-15 18:48:25.887237] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:33:40.561 [2024-07-15 18:48:25.887253] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:40.561 [2024-07-15 18:48:25.887262] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:40.561 00:33:40.561 real 0m0.335s 00:33:40.561 user 0m0.221s 00:33:40.561 sys 0m0.112s 00:33:40.561 18:48:25 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:33:40.561 18:48:25 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:40.561 18:48:25 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:33:40.561 ************************************ 00:33:40.561 END TEST bdev_json_nonarray 00:33:40.561 ************************************ 00:33:40.561 18:48:26 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:33:40.561 18:48:26 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # true 00:33:40.561 18:48:26 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:33:40.561 18:48:26 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:33:40.561 18:48:26 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:33:40.561 18:48:26 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:33:40.561 18:48:26 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:40.561 18:48:26 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.561 18:48:26 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:40.561 ************************************ 00:33:40.561 START TEST bdev_crypto_enomem 00:33:40.561 ************************************ 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local 
base_dev=base0 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=2998108 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 2998108 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 2998108 ']' 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:40.561 18:48:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:40.561 [2024-07-15 18:48:26.081262] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:33:40.561 [2024-07-15 18:48:26.081322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998108 ] 00:33:40.821 [2024-07-15 18:48:26.184506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.821 [2024-07-15 18:48:26.295993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:41.757 true 00:33:41.757 base0 00:33:41.757 true 00:33:41.757 [2024-07-15 18:48:27.064374] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:41.757 crypt0 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.757 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:41.757 [ 00:33:41.757 { 00:33:41.757 "name": "crypt0", 00:33:41.757 "aliases": [ 00:33:41.757 "2d09f145-f7c1-5f76-8011-6a91a4e5c63e" 00:33:41.757 ], 00:33:41.757 "product_name": "crypto", 00:33:41.757 "block_size": 512, 00:33:41.757 "num_blocks": 2097152, 00:33:41.757 "uuid": "2d09f145-f7c1-5f76-8011-6a91a4e5c63e", 00:33:41.757 "assigned_rate_limits": { 00:33:41.757 "rw_ios_per_sec": 0, 00:33:41.757 "rw_mbytes_per_sec": 0, 00:33:41.757 "r_mbytes_per_sec": 0, 00:33:41.757 "w_mbytes_per_sec": 0 00:33:41.757 }, 00:33:41.757 "claimed": false, 00:33:41.757 "zoned": false, 00:33:41.757 "supported_io_types": { 00:33:41.757 "read": true, 00:33:41.757 "write": true, 00:33:41.757 "unmap": false, 00:33:41.757 "flush": false, 00:33:41.757 "reset": true, 00:33:41.757 "nvme_admin": false, 00:33:41.757 "nvme_io": false, 00:33:41.757 "nvme_io_md": false, 00:33:41.757 "write_zeroes": true, 00:33:41.757 "zcopy": false, 00:33:41.758 "get_zone_info": false, 00:33:41.758 "zone_management": false, 00:33:41.758 "zone_append": false, 00:33:41.758 "compare": false, 00:33:41.758 "compare_and_write": false, 00:33:41.758 "abort": false, 
00:33:41.758 "seek_hole": false, 00:33:41.758 "seek_data": false, 00:33:41.758 "copy": false, 00:33:41.758 "nvme_iov_md": false 00:33:41.758 }, 00:33:41.758 "memory_domains": [ 00:33:41.758 { 00:33:41.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.758 "dma_device_type": 2 00:33:41.758 } 00:33:41.758 ], 00:33:41.758 "driver_specific": { 00:33:41.758 "crypto": { 00:33:41.758 "base_bdev_name": "EE_base0", 00:33:41.758 "name": "crypt0", 00:33:41.758 "key_name": "test_dek_sw" 00:33:41.758 } 00:33:41.758 } 00:33:41.758 } 00:33:41.758 ] 00:33:41.758 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.758 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0 00:33:41.758 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=2998231 00:33:41.758 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1 00:33:41.758 18:48:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:41.758 Running I/O for 5 seconds... 
00:33:42.694 18:48:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:33:42.694 18:48:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.694 18:48:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:42.694 18:48:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.694 18:48:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 2998231 00:33:46.882 00:33:46.882 Latency(us) 00:33:46.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.882 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:33:46.882 crypt0 : 5.00 24965.84 97.52 0.00 0.00 1276.23 592.94 2044.10 00:33:46.882 =================================================================================================================== 00:33:46.882 Total : 24965.84 97.52 0.00 0.00 1276.23 592.94 2044.10 00:33:46.882 0 00:33:46.882 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0 00:33:46.882 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.882 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:46.882 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.882 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 2998108 00:33:46.882 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 2998108 ']' 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 2998108 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname 00:33:46.883 18:48:32 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2998108 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2998108' 00:33:46.883 killing process with pid 2998108 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 2998108 00:33:46.883 Received shutdown signal, test time was about 5.000000 seconds 00:33:46.883 00:33:46.883 Latency(us) 00:33:46.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.883 =================================================================================================================== 00:33:46.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.883 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 2998108 00:33:47.142 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT 00:33:47.142 00:33:47.142 real 0m6.544s 00:33:47.142 user 0m6.859s 00:33:47.142 sys 0m0.333s 00:33:47.142 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:47.142 18:48:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:47.142 ************************************ 00:33:47.142 END TEST bdev_crypto_enomem 00:33:47.142 ************************************ 00:33:47.142 18:48:32 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - 
SIGINT SIGTERM EXIT 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:33:47.142 18:48:32 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:33:47.142 00:33:47.142 real 0m53.935s 00:33:47.142 user 1m40.441s 00:33:47.142 sys 0m5.060s 00:33:47.142 18:48:32 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:47.142 18:48:32 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:47.142 ************************************ 00:33:47.142 END TEST blockdev_crypto_sw 00:33:47.142 ************************************ 00:33:47.142 18:48:32 -- common/autotest_common.sh@1142 -- # return 0 00:33:47.142 18:48:32 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:33:47.142 18:48:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:47.142 18:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.142 18:48:32 -- common/autotest_common.sh@10 -- # set +x 00:33:47.142 ************************************ 00:33:47.142 START TEST blockdev_crypto_qat 00:33:47.142 ************************************ 00:33:47.142 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:33:47.401 * Looking for test storage... 
00:33:47.401 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # 
env_ctx= 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2999138 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:33:47.401 18:48:32 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 2999138 00:33:47.401 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 2999138 ']' 00:33:47.401 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.401 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.402 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.402 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.402 18:48:32 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:47.402 [2024-07-15 18:48:32.871044] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:33:47.402 [2024-07-15 18:48:32.871160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999138 ] 00:33:47.660 [2024-07-15 18:48:33.006224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.660 [2024-07-15 18:48:33.098412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.223 18:48:33 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.223 18:48:33 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0 00:33:48.223 18:48:33 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:48.223 18:48:33 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:33:48.223 18:48:33 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:33:48.223 18:48:33 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.223 18:48:33 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:48.482 [2024-07-15 18:48:33.780532] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:48.482 [2024-07-15 18:48:33.788570] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:48.482 [2024-07-15 18:48:33.796589] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:48.482 [2024-07-15 18:48:33.862852] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:51.028 true 00:33:51.028 true 00:33:51.028 true 00:33:51.028 true 00:33:51.028 Malloc0 00:33:51.028 Malloc1 00:33:51.028 Malloc2 00:33:51.028 Malloc3 00:33:51.028 [2024-07-15 18:48:36.188866] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 
00:33:51.028 crypto_ram 00:33:51.028 [2024-07-15 18:48:36.196889] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:51.028 crypto_ram1 00:33:51.028 [2024-07-15 18:48:36.204908] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:51.028 crypto_ram2 00:33:51.028 [2024-07-15 18:48:36.212932] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:51.028 crypto_ram3 00:33:51.028 [ 00:33:51.028 { 00:33:51.028 "name": "Malloc1", 00:33:51.028 "aliases": [ 00:33:51.028 "707083da-c7b9-4386-a1a4-120aff213264" 00:33:51.028 ], 00:33:51.028 "product_name": "Malloc disk", 00:33:51.028 "block_size": 512, 00:33:51.028 "num_blocks": 65536, 00:33:51.028 "uuid": "707083da-c7b9-4386-a1a4-120aff213264", 00:33:51.028 "assigned_rate_limits": { 00:33:51.028 "rw_ios_per_sec": 0, 00:33:51.028 "rw_mbytes_per_sec": 0, 00:33:51.028 "r_mbytes_per_sec": 0, 00:33:51.028 "w_mbytes_per_sec": 0 00:33:51.028 }, 00:33:51.028 "claimed": true, 00:33:51.028 "claim_type": "exclusive_write", 00:33:51.028 "zoned": false, 00:33:51.028 "supported_io_types": { 00:33:51.028 "read": true, 00:33:51.028 "write": true, 00:33:51.028 "unmap": true, 00:33:51.028 "flush": true, 00:33:51.028 "reset": true, 00:33:51.028 "nvme_admin": false, 00:33:51.028 "nvme_io": false, 00:33:51.028 "nvme_io_md": false, 00:33:51.028 "write_zeroes": true, 00:33:51.028 "zcopy": true, 00:33:51.028 "get_zone_info": false, 00:33:51.028 "zone_management": false, 00:33:51.028 "zone_append": false, 00:33:51.028 "compare": false, 00:33:51.028 "compare_and_write": false, 00:33:51.028 "abort": true, 00:33:51.028 "seek_hole": false, 00:33:51.028 "seek_data": false, 00:33:51.028 "copy": true, 00:33:51.028 "nvme_iov_md": false 00:33:51.028 }, 00:33:51.028 "memory_domains": [ 00:33:51.028 { 00:33:51.028 "dma_device_id": "system", 00:33:51.028 "dma_device_type": 1 00:33:51.028 }, 00:33:51.028 { 00:33:51.028 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:33:51.028 "dma_device_type": 2 00:33:51.028 } 00:33:51.028 ], 00:33:51.028 "driver_specific": {} 00:33:51.028 } 00:33:51.028 ] 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:33:51.028 18:48:36 blockdev_crypto_qat -- 
bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "05deb763-e1aa-5f79-b0af-5840d663e0d2"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "05deb763-e1aa-5f79-b0af-5840d663e0d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' 
"aliases": [' ' "37504b5f-5614-5991-b833-cb40b207f200"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "37504b5f-5614-5991-b833-cb40b207f200",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "8ed3e307-d2ce-52be-8411-68d9c5c875f9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "8ed3e307-d2ce-52be-8411-68d9c5c875f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3aacaf16-52d4-5299-8e2e-a9226a8fe6bb"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3aacaf16-52d4-5299-8e2e-a9226a8fe6bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:33:51.028 
18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:33:51.028 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 2999138 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@948 -- # '[' -z 2999138 ']' 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # kill -0 2999138 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # uname 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2999138 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2999138' 00:33:51.028 killing process with pid 2999138 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # kill 2999138 00:33:51.028 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@972 -- # wait 2999138 00:33:51.593 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:51.593 18:48:36 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:33:51.593 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:51.593 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:51.593 18:48:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:51.593 ************************************ 00:33:51.593 START TEST bdev_hello_world 00:33:51.593 
************************************ 00:33:51.593 18:48:36 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:33:51.593 [2024-07-15 18:48:37.040008] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:33:51.593 [2024-07-15 18:48:37.040066] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999794 ] 00:33:51.593 [2024-07-15 18:48:37.137143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.851 [2024-07-15 18:48:37.229024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.851 [2024-07-15 18:48:37.250310] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:51.851 [2024-07-15 18:48:37.258338] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:51.851 [2024-07-15 18:48:37.266357] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:51.851 [2024-07-15 18:48:37.385716] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:54.381 [2024-07-15 18:48:39.559962] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:54.381 [2024-07-15 18:48:39.560032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:54.381 [2024-07-15 18:48:39.560046] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:54.381 [2024-07-15 18:48:39.567984] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_qat_xts" 00:33:54.381 [2024-07-15 18:48:39.568002] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:54.382 [2024-07-15 18:48:39.568011] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:54.382 [2024-07-15 18:48:39.576001] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:54.382 [2024-07-15 18:48:39.576017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:54.382 [2024-07-15 18:48:39.576025] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:54.382 [2024-07-15 18:48:39.584020] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:54.382 [2024-07-15 18:48:39.584034] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:54.382 [2024-07-15 18:48:39.584042] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:54.382 [2024-07-15 18:48:39.656614] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:54.382 [2024-07-15 18:48:39.656654] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:33:54.382 [2024-07-15 18:48:39.656670] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:54.382 [2024-07-15 18:48:39.657988] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:54.382 [2024-07-15 18:48:39.658059] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:54.382 [2024-07-15 18:48:39.658073] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:54.382 [2024-07-15 18:48:39.658116] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:33:54.382 00:33:54.382 [2024-07-15 18:48:39.658132] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:54.641 00:33:54.641 real 0m3.008s 00:33:54.641 user 0m2.658s 00:33:54.641 sys 0m0.302s 00:33:54.641 18:48:39 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:54.641 18:48:39 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:54.641 ************************************ 00:33:54.641 END TEST bdev_hello_world 00:33:54.641 ************************************ 00:33:54.641 18:48:40 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:33:54.641 18:48:40 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:33:54.641 18:48:40 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:54.641 18:48:40 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.641 18:48:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:54.641 ************************************ 00:33:54.641 START TEST bdev_bounds 00:33:54.641 ************************************ 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=3000235 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 3000235' 00:33:54.641 Process bdevio pid: 3000235 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- 
bdev/blockdev.sh@293 -- # waitforlisten 3000235 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 3000235 ']' 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.641 18:48:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:54.641 [2024-07-15 18:48:40.091865] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:33:54.641 [2024-07-15 18:48:40.091928] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000235 ] 00:33:54.899 [2024-07-15 18:48:40.192819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.899 [2024-07-15 18:48:40.286464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.899 [2024-07-15 18:48:40.286568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.899 [2024-07-15 18:48:40.286569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.899 [2024-07-15 18:48:40.307986] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:54.899 [2024-07-15 18:48:40.316018] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:54.899 [2024-07-15 18:48:40.324041] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:54.899 [2024-07-15 18:48:40.421946] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:57.475 [2024-07-15 18:48:42.595251] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:57.475 [2024-07-15 18:48:42.595330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:57.475 [2024-07-15 18:48:42.595343] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:57.475 [2024-07-15 18:48:42.603273] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:57.475 [2024-07-15 18:48:42.603290] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:57.475 [2024-07-15 18:48:42.603298] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:57.475 [2024-07-15 18:48:42.611299] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:57.475 [2024-07-15 18:48:42.611314] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:57.475 [2024-07-15 18:48:42.611323] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:57.475 [2024-07-15 18:48:42.619322] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:57.475 [2024-07-15 18:48:42.619337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:57.475 [2024-07-15 18:48:42.619346] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:57.475 18:48:42 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:57.475 18:48:42 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:33:57.475 18:48:42 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:57.475 I/O targets: 00:33:57.475 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:33:57.475 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:33:57.475 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:33:57.475 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:33:57.475 00:33:57.475 00:33:57.475 CUnit - A unit testing framework for C - Version 2.1-3 00:33:57.475 http://cunit.sourceforge.net/ 00:33:57.475 00:33:57.475 00:33:57.475 Suite: bdevio tests on: crypto_ram3 00:33:57.475 Test: blockdev write read block ...passed 00:33:57.475 Test: blockdev write zeroes read block ...passed 00:33:57.475 Test: blockdev write zeroes read no split ...passed 00:33:57.475 Test: blockdev write zeroes read split 
...passed 00:33:57.475 Test: blockdev write zeroes read split partial ...passed 00:33:57.475 Test: blockdev reset ...passed 00:33:57.475 Test: blockdev write read 8 blocks ...passed 00:33:57.475 Test: blockdev write read size > 128k ...passed 00:33:57.475 Test: blockdev write read invalid size ...passed 00:33:57.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:57.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:57.475 Test: blockdev write read max offset ...passed 00:33:57.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:57.475 Test: blockdev writev readv 8 blocks ...passed 00:33:57.475 Test: blockdev writev readv 30 x 1block ...passed 00:33:57.475 Test: blockdev writev readv block ...passed 00:33:57.475 Test: blockdev writev readv size > 128k ...passed 00:33:57.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:57.475 Test: blockdev comparev and writev ...passed 00:33:57.475 Test: blockdev nvme passthru rw ...passed 00:33:57.475 Test: blockdev nvme passthru vendor specific ...passed 00:33:57.475 Test: blockdev nvme admin passthru ...passed 00:33:57.475 Test: blockdev copy ...passed 00:33:57.475 Suite: bdevio tests on: crypto_ram2 00:33:57.475 Test: blockdev write read block ...passed 00:33:57.475 Test: blockdev write zeroes read block ...passed 00:33:57.475 Test: blockdev write zeroes read no split ...passed 00:33:57.475 Test: blockdev write zeroes read split ...passed 00:33:57.733 Test: blockdev write zeroes read split partial ...passed 00:33:57.733 Test: blockdev reset ...passed 00:33:57.733 Test: blockdev write read 8 blocks ...passed 00:33:57.733 Test: blockdev write read size > 128k ...passed 00:33:57.733 Test: blockdev write read invalid size ...passed 00:33:57.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:57.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:57.733 Test: 
blockdev write read max offset ...passed 00:33:57.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:57.733 Test: blockdev writev readv 8 blocks ...passed 00:33:57.733 Test: blockdev writev readv 30 x 1block ...passed 00:33:57.733 Test: blockdev writev readv block ...passed 00:33:57.733 Test: blockdev writev readv size > 128k ...passed 00:33:57.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:57.733 Test: blockdev comparev and writev ...passed 00:33:57.733 Test: blockdev nvme passthru rw ...passed 00:33:57.733 Test: blockdev nvme passthru vendor specific ...passed 00:33:57.733 Test: blockdev nvme admin passthru ...passed 00:33:57.733 Test: blockdev copy ...passed 00:33:57.733 Suite: bdevio tests on: crypto_ram1 00:33:57.733 Test: blockdev write read block ...passed 00:33:57.733 Test: blockdev write zeroes read block ...passed 00:33:57.733 Test: blockdev write zeroes read no split ...passed 00:33:57.733 Test: blockdev write zeroes read split ...passed 00:33:57.991 Test: blockdev write zeroes read split partial ...passed 00:33:57.991 Test: blockdev reset ...passed 00:33:57.991 Test: blockdev write read 8 blocks ...passed 00:33:57.991 Test: blockdev write read size > 128k ...passed 00:33:57.991 Test: blockdev write read invalid size ...passed 00:33:57.991 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:57.991 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:57.991 Test: blockdev write read max offset ...passed 00:33:57.991 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:57.991 Test: blockdev writev readv 8 blocks ...passed 00:33:57.991 Test: blockdev writev readv 30 x 1block ...passed 00:33:57.991 Test: blockdev writev readv block ...passed 00:33:57.991 Test: blockdev writev readv size > 128k ...passed 00:33:57.991 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:57.991 Test: blockdev comparev and writev 
...passed 00:33:57.991 Test: blockdev nvme passthru rw ...passed 00:33:57.991 Test: blockdev nvme passthru vendor specific ...passed 00:33:57.991 Test: blockdev nvme admin passthru ...passed 00:33:57.991 Test: blockdev copy ...passed 00:33:57.991 Suite: bdevio tests on: crypto_ram 00:33:57.991 Test: blockdev write read block ...passed 00:33:57.991 Test: blockdev write zeroes read block ...passed 00:33:57.991 Test: blockdev write zeroes read no split ...passed 00:33:58.249 Test: blockdev write zeroes read split ...passed 00:33:58.249 Test: blockdev write zeroes read split partial ...passed 00:33:58.249 Test: blockdev reset ...passed 00:33:58.249 Test: blockdev write read 8 blocks ...passed 00:33:58.249 Test: blockdev write read size > 128k ...passed 00:33:58.249 Test: blockdev write read invalid size ...passed 00:33:58.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:58.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:58.249 Test: blockdev write read max offset ...passed 00:33:58.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:58.249 Test: blockdev writev readv 8 blocks ...passed 00:33:58.249 Test: blockdev writev readv 30 x 1block ...passed 00:33:58.249 Test: blockdev writev readv block ...passed 00:33:58.249 Test: blockdev writev readv size > 128k ...passed 00:33:58.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:58.249 Test: blockdev comparev and writev ...passed 00:33:58.249 Test: blockdev nvme passthru rw ...passed 00:33:58.249 Test: blockdev nvme passthru vendor specific ...passed 00:33:58.249 Test: blockdev nvme admin passthru ...passed 00:33:58.249 Test: blockdev copy ...passed 00:33:58.249 00:33:58.249 Run Summary: Type Total Ran Passed Failed Inactive 00:33:58.249 suites 4 4 n/a 0 0 00:33:58.249 tests 92 92 92 0 0 00:33:58.249 asserts 520 520 520 0 n/a 00:33:58.249 00:33:58.249 Elapsed time = 1.566 seconds 00:33:58.249 0 00:33:58.249 
18:48:43 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 3000235 00:33:58.249 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 3000235 ']' 00:33:58.249 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 3000235 00:33:58.249 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:33:58.249 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:58.249 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3000235 00:33:58.250 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:58.250 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:58.250 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3000235' 00:33:58.250 killing process with pid 3000235 00:33:58.250 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # kill 3000235 00:33:58.250 18:48:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@972 -- # wait 3000235 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:33:58.817 00:33:58.817 real 0m4.075s 00:33:58.817 user 0m11.181s 00:33:58.817 sys 0m0.465s 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 ************************************ 00:33:58.817 END TEST bdev_bounds 00:33:58.817 ************************************ 00:33:58.817 18:48:44 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:33:58.817 18:48:44 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd 
nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:33:58.817 18:48:44 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:58.817 18:48:44 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:58.817 18:48:44 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 ************************************ 00:33:58.817 START TEST bdev_nbd 00:33:58.817 ************************************ 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:58.817 18:48:44 
blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=3000899 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 3000899 /var/tmp/spdk-nbd.sock 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 3000899 ']' 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:58.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:58.817 18:48:44 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 [2024-07-15 18:48:44.215681] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:33:58.817 [2024-07-15 18:48:44.215739] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.817 [2024-07-15 18:48:44.315301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.076 [2024-07-15 18:48:44.410812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.076 [2024-07-15 18:48:44.432120] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:59.076 [2024-07-15 18:48:44.440139] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:59.076 [2024-07-15 18:48:44.448158] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:59.076 [2024-07-15 18:48:44.551605] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:01.606 [2024-07-15 18:48:46.726622] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:01.606 [2024-07-15 18:48:46.726678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:01.606 [2024-07-15 18:48:46.726691] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:01.606 [2024-07-15 18:48:46.734645] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:01.606 [2024-07-15 18:48:46.734663] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc1 00:34:01.606 [2024-07-15 18:48:46.734671] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:01.606 [2024-07-15 18:48:46.742664] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:01.606 [2024-07-15 18:48:46.742679] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:01.606 [2024-07-15 18:48:46.742687] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:01.606 [2024-07-15 18:48:46.750684] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:01.606 [2024-07-15 18:48:46.750698] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:01.606 [2024-07-15 18:48:46.750706] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:01.890 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:02.148 
18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:02.148 1+0 records in 00:34:02.148 1+0 records out 00:34:02.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294717 s, 13.9 MB/s 00:34:02.148 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:02.406 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:02.664 18:48:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:02.664 1+0 records in 00:34:02.664 1+0 records out 00:34:02.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238746 s, 17.2 MB/s 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:02.664 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- 
# basename /dev/nbd2 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:34:02.922 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:02.923 1+0 records in 00:34:02.923 1+0 records out 00:34:02.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284878 s, 14.4 MB/s 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # 
(( i++ )) 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:02.923 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:03.182 1+0 records in 00:34:03.182 1+0 records out 00:34:03.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241071 s, 17.0 MB/s 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:03.182 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:03.440 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd0", 00:34:03.440 "bdev_name": "crypto_ram" 00:34:03.440 }, 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd1", 00:34:03.440 "bdev_name": "crypto_ram1" 00:34:03.440 }, 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd2", 00:34:03.440 "bdev_name": "crypto_ram2" 00:34:03.440 }, 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd3", 00:34:03.440 "bdev_name": "crypto_ram3" 00:34:03.440 } 00:34:03.440 ]' 00:34:03.440 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:03.440 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd0", 00:34:03.440 "bdev_name": "crypto_ram" 00:34:03.440 }, 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd1", 00:34:03.440 "bdev_name": "crypto_ram1" 00:34:03.440 }, 00:34:03.440 { 00:34:03.440 "nbd_device": "/dev/nbd2", 00:34:03.440 "bdev_name": "crypto_ram2" 00:34:03.441 }, 00:34:03.441 { 00:34:03.441 "nbd_device": "/dev/nbd3", 00:34:03.441 "bdev_name": 
"crypto_ram3" 00:34:03.441 } 00:34:03.441 ]' 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:03.441 18:48:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:03.699 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:03.699 18:48:49 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:03.958 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd 
-- bdev/nbd_common.sh@45 -- # return 0 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:04.216 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:04.475 18:48:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:04.733 18:48:50 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:04.733 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:34:04.990 /dev/nbd0 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:04.990 1+0 records in 00:34:04.990 1+0 records out 00:34:04.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253785 s, 16.1 MB/s 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:04.990 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:34:05.249 /dev/nbd1 00:34:05.249 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:05.249 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 
/proc/partitions 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:05.250 1+0 records in 00:34:05.250 1+0 records out 00:34:05.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029156 s, 14.0 MB/s 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:05.250 18:48:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:34:05.508 /dev/nbd10 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd10 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:05.508 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:05.766 1+0 records in 00:34:05.766 1+0 records out 00:34:05.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244721 s, 16.7 MB/s 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:05.766 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:34:05.766 /dev/nbd11 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:06.025 1+0 records in 00:34:06.025 1+0 records out 00:34:06.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303065 s, 13.5 MB/s 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:06.025 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:06.297 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:06.297 { 00:34:06.297 "nbd_device": "/dev/nbd0", 00:34:06.297 "bdev_name": "crypto_ram" 00:34:06.297 }, 00:34:06.297 { 00:34:06.297 "nbd_device": "/dev/nbd1", 00:34:06.297 "bdev_name": "crypto_ram1" 00:34:06.297 }, 00:34:06.297 { 00:34:06.297 "nbd_device": "/dev/nbd10", 00:34:06.297 "bdev_name": "crypto_ram2" 00:34:06.297 }, 00:34:06.297 { 00:34:06.297 "nbd_device": "/dev/nbd11", 00:34:06.297 "bdev_name": "crypto_ram3" 00:34:06.297 } 00:34:06.297 ]' 00:34:06.297 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:06.297 { 00:34:06.298 "nbd_device": "/dev/nbd0", 00:34:06.298 "bdev_name": "crypto_ram" 00:34:06.298 }, 00:34:06.298 { 00:34:06.298 "nbd_device": "/dev/nbd1", 00:34:06.298 "bdev_name": "crypto_ram1" 00:34:06.298 }, 00:34:06.298 { 00:34:06.298 "nbd_device": "/dev/nbd10", 00:34:06.298 "bdev_name": "crypto_ram2" 00:34:06.298 }, 00:34:06.298 { 00:34:06.298 "nbd_device": "/dev/nbd11", 00:34:06.298 "bdev_name": "crypto_ram3" 00:34:06.298 } 00:34:06.298 ]' 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:06.298 18:48:51 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:06.298 /dev/nbd1 00:34:06.298 /dev/nbd10 00:34:06.298 /dev/nbd11' 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:06.298 /dev/nbd1 00:34:06.298 /dev/nbd10 00:34:06.298 /dev/nbd11' 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:06.298 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:06.299 256+0 records in 00:34:06.299 256+0 records out 00:34:06.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103431 s, 101 MB/s 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:06.299 
18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:06.299 256+0 records in 00:34:06.299 256+0 records out 00:34:06.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0557605 s, 18.8 MB/s 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:06.299 256+0 records in 00:34:06.299 256+0 records out 00:34:06.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0478899 s, 21.9 MB/s 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:06.299 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:34:06.299 256+0 records in 00:34:06.299 256+0 records out 00:34:06.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398462 s, 26.3 MB/s 00:34:06.300 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:06.300 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:34:06.563 256+0 records in 00:34:06.563 256+0 records out 00:34:06.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367245 s, 28.6 MB/s 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:06.563 18:48:51 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:34:06.563 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 
00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:06.564 18:48:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:06.822 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:07.080 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:07.645 18:48:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd11 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:07.903 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 
00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:08.160 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:08.417 malloc_lvol_verify 00:34:08.417 18:48:53 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:08.674 7d3091e3-c07e-4787-9985-f7da0290866b 00:34:08.674 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:08.930 99f89875-3de2-4d31-ab51-b185daf3cb06 00:34:08.930 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:09.186 /dev/nbd0 
00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:09.186 mke2fs 1.46.5 (30-Dec-2021) 00:34:09.186 Discarding device blocks: 0/4096 done 00:34:09.186 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:09.186 00:34:09.186 Allocating group tables: 0/1 done 00:34:09.186 Writing inode tables: 0/1 done 00:34:09.186 Creating journal (1024 blocks): done 00:34:09.186 Writing superblocks and filesystem accounting information: 0/1 done 00:34:09.186 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:09.186 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 3000899 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 3000899 ']' 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 3000899 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:34:09.443 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:09.444 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3000899 00:34:09.444 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:09.444 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:09.444 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3000899' 00:34:09.444 killing process with pid 3000899 00:34:09.444 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # kill 3000899 00:34:09.444 18:48:54 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@972 -- # wait 3000899 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:34:10.010 00:34:10.010 real 0m11.139s 00:34:10.010 user 0m16.017s 00:34:10.010 sys 0m3.308s 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:10.010 
18:48:55 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:10.010 ************************************ 00:34:10.010 END TEST bdev_nbd 00:34:10.010 ************************************ 00:34:10.010 18:48:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:34:10.010 18:48:55 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:34:10.010 18:48:55 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']' 00:34:10.010 18:48:55 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']' 00:34:10.010 18:48:55 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:34:10.010 18:48:55 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:10.010 18:48:55 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.010 18:48:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:10.010 ************************************ 00:34:10.010 START TEST bdev_fio 00:34:10.010 ************************************ 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:10.010 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO ''
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']'
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']'
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']'
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']'
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]'
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:34:10.010 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}"
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:34:10.011 ************************************
00:34:10.011 START TEST bdev_fio_rw_verify
00:34:10.011 ************************************
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib=
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify --
common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:10.011 18:48:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:10.268 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:10.268 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:10.268 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:10.268 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:10.268 fio-3.35
00:34:10.268 Starting 4 threads
00:34:25.147
00:34:25.147 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3003321: Mon Jul 15 18:49:08 2024
00:34:25.147 read: IOPS=22.9k, BW=89.6MiB/s (94.0MB/s)(896MiB/10001msec)
00:34:25.147 slat (usec): min=11, max=445, avg=60.38, stdev=42.99
00:34:25.147 clat (usec): min=15, max=1584, avg=328.18, stdev=244.33
00:34:25.147 lat (usec): min=54, max=1768, avg=388.56, stdev=271.61
00:34:25.147 clat percentiles (usec):
00:34:25.147 | 50.000th=[ 245], 99.000th=[ 1172], 99.900th=[ 1369], 99.990th=[ 1467],
00:34:25.147 | 99.999th=[ 1549]
00:34:25.147 write: IOPS=25.3k, BW=98.9MiB/s (104MB/s)(965MiB/9754msec); 0 zone resets
00:34:25.147 slat (usec): min=17, max=498, avg=71.42, stdev=44.26
00:34:25.147 clat (usec): min=17, max=2277, avg=368.34, stdev=262.49
00:34:25.147 lat (usec): min=57, max=2511, avg=439.75, stdev=290.58
00:34:25.147 clat percentiles (usec):
00:34:25.147 | 50.000th=[ 289], 99.000th=[ 1237], 99.900th=[ 1450], 99.990th=[ 1598],
00:34:25.147 | 99.999th=[ 2180]
00:34:25.147 bw ( KiB/s): min=71712, max=144387, per=98.16%, avg=99419.53, stdev=5738.68, samples=76
00:34:25.147 iops : min=17928, max=36096, avg=24854.84, stdev=1434.64, samples=76
00:34:25.147 lat (usec) : 20=0.01%, 50=0.04%, 100=6.83%, 250=39.31%, 500=33.30%
00:34:25.147 lat (usec) : 750=11.33%, 1000=5.92%
00:34:25.147 lat (msec) : 2=3.27%, 4=0.01%
00:34:25.147 cpu : usr=99.64%, sys=0.00%, ctx=90, majf=0, minf=275
00:34:25.147 IO depths : 1=3.7%, 2=27.5%, 4=55.0%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:25.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.147 complete : 0=0.0%, 4=87.9%, 8=12.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.147 issued rwts: total=229417,246969,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:25.147 latency : target=0, window=0, percentile=100.00%, depth=8
00:34:25.147
00:34:25.147 Run status group 0 (all jobs):
00:34:25.147 READ: bw=89.6MiB/s (94.0MB/s), 89.6MiB/s-89.6MiB/s (94.0MB/s-94.0MB/s), io=896MiB (940MB), run=10001-10001msec
00:34:25.147 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=965MiB (1012MB), run=9754-9754msec
00:34:25.147
00:34:25.147 real 0m13.413s
00:34:25.147 user 0m49.337s
00:34:25.147 sys 0m0.403s
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:34:25.147 ************************************
00:34:25.147 END TEST bdev_fio_rw_verify
00:34:25.147 ************************************
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:34:25.147 18:49:08
blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' ''
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']'
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']'
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']'
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']'
00:34:25.147 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite
00:34:25.147 18:49:08
blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "05deb763-e1aa-5f79-b0af-5840d663e0d2"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "05deb763-e1aa-5f79-b0af-5840d663e0d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "37504b5f-5614-5991-b833-cb40b207f200"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "37504b5f-5614-5991-b833-cb40b207f200",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "8ed3e307-d2ce-52be-8411-68d9c5c875f9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "8ed3e307-d2ce-52be-8411-68d9c5c875f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": 
"test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3aacaf16-52d4-5299-8e2e-a9226a8fe6bb"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3aacaf16-52d4-5299-8e2e-a9226a8fe6bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:34:25.148 crypto_ram1 00:34:25.148 crypto_ram2 00:34:25.148 crypto_ram3 ]] 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "05deb763-e1aa-5f79-b0af-5840d663e0d2"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "05deb763-e1aa-5f79-b0af-5840d663e0d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "37504b5f-5614-5991-b833-cb40b207f200"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "37504b5f-5614-5991-b833-cb40b207f200",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "8ed3e307-d2ce-52be-8411-68d9c5c875f9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "8ed3e307-d2ce-52be-8411-68d9c5c875f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3aacaf16-52d4-5299-8e2e-a9226a8fe6bb"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3aacaf16-52d4-5299-8e2e-a9226a8fe6bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]'
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:34:25.148 ************************************
00:34:25.148 START TEST bdev_fio_trim
00:34:25.148 ************************************
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers
00:34:25.148 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:34:25.149 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift
00:34:25.149 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib=
00:34:25.149 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:25.149 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:34:25.149 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan
00:34:25.149 18:49:08 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:25.149 18:49:09 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:34:25.149 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:25.149 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:25.149 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:25.149 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:34:25.149 fio-3.35
00:34:25.149 Starting 4 threads
00:34:37.362
00:34:37.362 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3005551: Mon Jul 15 18:49:22 2024
00:34:37.362 write: IOPS=34.9k, BW=136MiB/s (143MB/s)(1365MiB/10001msec); 0 zone resets
00:34:37.362 slat (usec): min=19, max=444, avg=67.64, stdev=34.17
00:34:37.362 clat (usec): min=47, max=1523, avg=238.83, stdev=130.66
00:34:37.362 lat (usec): min=72, max=1606, avg=306.47, stdev=147.71
00:34:37.362 clat percentiles (usec):
00:34:37.362 | 50.000th=[ 219], 99.000th=[ 676], 99.900th=[ 807], 99.990th=[ 881],
00:34:37.362 | 99.999th=[ 1303]
00:34:37.362 bw ( KiB/s): min=125504, max=199766, per=100.00%, avg=140400.32, stdev=4881.90, samples=76
00:34:37.362 iops : min=31376, max=49941, avg=35100.05, stdev=1220.46, samples=76
00:34:37.362 trim: IOPS=34.9k, BW=136MiB/s (143MB/s)(1365MiB/10001msec); 0 zone resets
00:34:37.362 slat (usec): min=4, max=315, avg=18.51, stdev= 6.80
00:34:37.362 clat (usec): min=72, max=1606, avg=306.66, stdev=147.72
00:34:37.362 lat (usec): min=80, max=1623, avg=325.16, stdev=149.56
00:34:37.362 clat percentiles (usec):
00:34:37.362 | 50.000th=[ 281], 99.000th=[ 816], 99.900th=[ 971], 99.990th=[ 1057],
00:34:37.362 | 99.999th=[ 1352]
00:34:37.362 bw ( KiB/s): min=125504, max=199766, per=100.00%, avg=140400.32, stdev=4881.90, samples=76
00:34:37.362 iops : min=31376, max=49941, avg=35100.05, stdev=1220.46, samples=76
00:34:37.362 lat (usec) : 50=0.01%, 100=6.81%, 250=43.42%, 500=42.94%, 750=5.77%
00:34:37.362 lat (usec) : 1000=1.03%
00:34:37.362 lat (msec) : 2=0.02%
00:34:37.362 cpu : usr=99.63%, sys=0.00%, ctx=106, majf=0, minf=105
00:34:37.362 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:37.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:37.362 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:37.362 issued rwts: total=0,349472,349472,0 short=0,0,0,0 dropped=0,0,0,0
00:34:37.362 latency : target=0, window=0, percentile=100.00%, depth=8
00:34:37.362
00:34:37.362 Run status group 0 (all jobs):
00:34:37.362 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=1365MiB (1431MB), run=10001-10001msec
00:34:37.362 TRIM: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=1365MiB (1431MB), run=10001-10001msec
00:34:37.362
00:34:37.362 real 0m13.525s
00:34:37.362 user 0m48.773s
00:34:37.362 sys 0m0.425s
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:34:37.362 ************************************
00:34:37.362 END TEST bdev_fio_trim
00:34:37.362 ************************************
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:34:37.362 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:34:37.362
00:34:37.362 real 0m27.212s
00:34:37.362 user 1m38.281s
00:34:37.362 sys 0m0.951s
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:34:37.362 ************************************
00:34:37.362 END TEST bdev_fio
00:34:37.362 ************************************
00:34:37.362 18:49:22 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:34:37.362 18:49:22 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:34:37.362 18:49:22 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:34:37.362 18:49:22 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:34:37.362 18:49:22 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:37.362 18:49:22 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:37.362 ************************************
00:34:37.362 START TEST bdev_verify
00:34:37.362 ************************************
00:34:37.362 18:49:22 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:34:37.362 [2024-07-15 18:49:22.644919] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:34:37.362 [2024-07-15 18:49:22.644996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007066 ]
00:34:37.362 [2024-07-15 18:49:22.746878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:37.362 [2024-07-15 18:49:22.845238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:37.362 [2024-07-15 18:49:22.845243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:37.362 [2024-07-15 18:49:22.866770] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:34:37.362 [2024-07-15 18:49:22.874799] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:34:37.362 [2024-07-15 18:49:22.882821] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:34:37.622 [2024-07-15 18:49:22.987121] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:34:40.170 [2024-07-15 18:49:25.155340] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:34:40.170 [2024-07-15 18:49:25.155416] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:34:40.170 [2024-07-15 18:49:25.155429] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:40.170 [2024-07-15 18:49:25.163364] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:34:40.170 [2024-07-15 18:49:25.163382] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:34:40.170 [2024-07-15 18:49:25.163391] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:40.170
[2024-07-15 18:49:25.171384] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:40.170 [2024-07-15 18:49:25.171401] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:40.170 [2024-07-15 18:49:25.171409] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:40.170 [2024-07-15 18:49:25.179411] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:40.170 [2024-07-15 18:49:25.179425] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:40.170 [2024-07-15 18:49:25.179433] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:40.170 Running I/O for 5 seconds... 00:34:45.435 00:34:45.435 Latency(us) 00:34:45.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.435 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x0 length 0x1000 00:34:45.435 crypto_ram : 5.08 441.10 1.72 0.00 0.00 288613.26 3183.18 179755.89 00:34:45.435 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x1000 length 0x1000 00:34:45.435 crypto_ram : 5.09 352.19 1.38 0.00 0.00 362505.87 6428.77 225693.50 00:34:45.435 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x0 length 0x1000 00:34:45.435 crypto_ram1 : 5.08 444.08 1.73 0.00 0.00 286130.57 3370.42 165774.87 00:34:45.435 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x1000 length 0x1000 00:34:45.435 crypto_ram1 : 5.09 352.08 1.38 0.00 0.00 361078.16 6928.09 205720.62 00:34:45.435 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 
Verification LBA range: start 0x0 length 0x1000 00:34:45.435 crypto_ram2 : 5.06 3438.69 13.43 0.00 0.00 36895.19 5242.88 29085.50 00:34:45.435 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x1000 length 0x1000 00:34:45.435 crypto_ram2 : 5.07 2699.49 10.54 0.00 0.00 46888.95 8550.89 34952.53 00:34:45.435 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x0 length 0x1000 00:34:45.435 crypto_ram3 : 5.06 3437.48 13.43 0.00 0.00 36808.39 4244.24 29085.50 00:34:45.435 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:45.435 Verification LBA range: start 0x1000 length 0x1000 00:34:45.435 crypto_ram3 : 5.08 2698.31 10.54 0.00 0.00 46761.48 7926.74 34702.87 00:34:45.435 =================================================================================================================== 00:34:45.435 Total : 13863.41 54.15 0.00 0.00 73331.65 3183.18 225693.50 00:34:45.435 00:34:45.435 real 0m8.138s 00:34:45.435 user 0m15.526s 00:34:45.435 sys 0m0.309s 00:34:45.435 18:49:30 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:45.435 18:49:30 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:45.435 ************************************ 00:34:45.435 END TEST bdev_verify 00:34:45.435 ************************************ 00:34:45.435 18:49:30 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:34:45.435 18:49:30 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:45.435 18:49:30 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:45.435 18:49:30 blockdev_crypto_qat -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:34:45.435 18:49:30 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:45.435 ************************************ 00:34:45.435 START TEST bdev_verify_big_io 00:34:45.435 ************************************ 00:34:45.435 18:49:30 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:45.435 [2024-07-15 18:49:30.820862] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:34:45.435 [2024-07-15 18:49:30.820926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008343 ] 00:34:45.435 [2024-07-15 18:49:30.919570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:45.693 [2024-07-15 18:49:31.012290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.694 [2024-07-15 18:49:31.012296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.694 [2024-07-15 18:49:31.033674] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:45.694 [2024-07-15 18:49:31.041699] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:45.694 [2024-07-15 18:49:31.049721] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:45.694 [2024-07-15 18:49:31.148334] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:48.222 [2024-07-15 18:49:33.326169] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_qat_cbc" 00:34:48.222 [2024-07-15 18:49:33.326239] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:48.222 [2024-07-15 18:49:33.326256] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:48.222 [2024-07-15 18:49:33.334185] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:48.222 [2024-07-15 18:49:33.334202] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:48.222 [2024-07-15 18:49:33.334211] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:48.222 [2024-07-15 18:49:33.342208] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:48.222 [2024-07-15 18:49:33.342223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:48.222 [2024-07-15 18:49:33.342232] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:48.222 [2024-07-15 18:49:33.350232] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:48.222 [2024-07-15 18:49:33.350251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:48.222 [2024-07-15 18:49:33.350259] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:48.222 Running I/O for 5 seconds... 00:34:49.163 [2024-07-15 18:49:34.461702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.462340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.462468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.462543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.462614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.462697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.463240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.463264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.468096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.468175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.468243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.468313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.468982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.469049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.469120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.469189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.469774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.469795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.474359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.474437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.474511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.474577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.475046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.475111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.475177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.475243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.475641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.475665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.479528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.479600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.479668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.479734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.480384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.480455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.480526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.480595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.481000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.481022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.484129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.484198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.484264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.484331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.484976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.485041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.485108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.485177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.485783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.485805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.489695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.489768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.489835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.489901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.490425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.490489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.490554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.490631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.491039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.491060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.495379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.495451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.495519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.495588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.496046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.496111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.496176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.496255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.496653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.496675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.500377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.500476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.500543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.500610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.501261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.501328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.501398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.501468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.501867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.501888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.504891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.504967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.505036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.163 [2024-07-15 18:49:34.505102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.505750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.505821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.505888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.163 [2024-07-15 18:49:34.505963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.506555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.506577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.510259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.510329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.510401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.510468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.510961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.511025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.164 [2024-07-15 18:49:34.511096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.511163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.511559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.511580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.515692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.515761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.515835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.515906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.516366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.516432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.516498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.516565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.516975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.164 [2024-07-15 18:49:34.516997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.520580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.520652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.520720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.520794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.521460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.521533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.521601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.521670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.522188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.522210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.525325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.525393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.164 [2024-07-15 18:49:34.525460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.525526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.525995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.526064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.526135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.526203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.526879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.526905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.530577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.530648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.530715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.530780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.531383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.164 [2024-07-15 18:49:34.531448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.531514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.531580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.531989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.532012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.535864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.535932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.536009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.536081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.536627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.536692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.536766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.164 [2024-07-15 18:49:34.536833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.426 [2024-07-15 18:49:34.765329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.426 [2024-07-15 18:49:34.765354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.426 [2024-07-15 18:49:34.769444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.426 [2024-07-15 18:49:34.770026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.426 [2024-07-15 18:49:34.770591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.426 [2024-07-15 18:49:34.771176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.772286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.772851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.773418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.773985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.774558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.774579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.778921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.779505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.780074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.780640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.781780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.782357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.782918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.783491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.784022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.784045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.788054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.788622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.789201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.789769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.790971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.791534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.792106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.792687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.793323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.793346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.797454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.798029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.798603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.799180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.800335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.800901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.801473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.802047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.802617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.802639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.806590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.807167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.807742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.808313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.809495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.810089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.810653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.811218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.811813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.811837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.815869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.816449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.817023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.817586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.818706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.819283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.819842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.820410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.820969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.820993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.825157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.825736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.826310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.826874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.828093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.828654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.829221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.829794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.830392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.830415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.834456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.835028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.835597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.836187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.837348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.837905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.838481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.839056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.839720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.839743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.843787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.844366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.844931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.846619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.849737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.851901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.854197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.856888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.857297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.857320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.863336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.865991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.867620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.869981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.873079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.873651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.874219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.874780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.875196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.875219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.880329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.427 [2024-07-15 18:49:34.882588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.427 [2024-07-15 18:49:34.883169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.883735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.886570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.889249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.891961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.893664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.894100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.894123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.897723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.900111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.902804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.905325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.908131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.428 [2024-07-15 18:49:34.910623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.912241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.912804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.913429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.913451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.917635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.920029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.922744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.923912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.925122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.925869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.928219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.930877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.428 [2024-07-15 18:49:34.931286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.931309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.934288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.934855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.936062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.938447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.941323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.943580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.946277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.948969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.949561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.949584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.955310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.428 [2024-07-15 18:49:34.957462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.959830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.962538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.963882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.964452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.965047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.966992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.967471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.967493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.972528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.973127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.973689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.428 [2024-07-15 18:49:34.974261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.688 [2024-07-15 18:49:34.977063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.979571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.981204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.983579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.983985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.984008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.988600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.990972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.993722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.995371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:34.998480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:35.001169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.688 [2024-07-15 18:49:35.001741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.688 [2024-07-15 18:49:35.002310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
(identical error repeated continuously from 18:49:35.002310 through 18:49:35.191691)
00:34:49.691 [2024-07-15 18:49:35.191691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:49.691 [2024-07-15 18:49:35.191713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.194917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.195984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.196053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.196701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.196723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.200100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.691 [2024-07-15 18:49:35.200170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.200238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.200308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.200910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.201016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.201091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.201157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.201225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.201795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.201816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.205171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.205250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.205320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.691 [2024-07-15 18:49:35.205389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.206964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.210243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.210311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.210378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.210445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.211064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.691 [2024-07-15 18:49:35.211142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.211210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.211290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.211373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.211971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.211994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.215353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.691 [2024-07-15 18:49:35.215436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.215514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.215582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.216185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.216273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.216339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.692 [2024-07-15 18:49:35.216405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.216472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.217084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.217107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.220530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.220599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.220668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.220735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.221359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.221444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.221513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.221584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.221652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.692 [2024-07-15 18:49:35.222231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.222253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.225575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.225644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.225725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.225819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.226412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.226499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.226566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.226633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.226698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.227384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.227407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.692 [2024-07-15 18:49:35.230617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.230685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.230753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.230819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.231438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.231544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.231608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.231674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.231740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.232382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.232404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.235581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.235653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.692 [2024-07-15 18:49:35.235736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.235804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.236410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.236499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.236566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.236647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.236745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.237386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.692 [2024-07-15 18:49:35.237409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.240657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.240727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.240820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.240883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.952 [2024-07-15 18:49:35.241520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.241604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.241672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.241740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.241808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.242473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.242497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.245996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.246068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.246136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.246204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.246770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.246857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.952 [2024-07-15 18:49:35.246942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.247025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.247092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.247630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.247658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.251120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.251209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.251786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.251854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.252464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.252543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.252609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.252675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.952 [2024-07-15 18:49:35.252744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.253357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.253380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.256626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.256708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.256776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.257356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.257926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.258022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.258087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.258154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.258222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.258848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.952 [2024-07-15 18:49:35.258870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.262751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.263324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.263895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.264465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.265108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.265683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.266256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.266821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.267398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.268017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.268040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.271833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.952 [2024-07-15 18:49:35.272410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.272985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.273542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.274173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.274751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.275342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.275901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.276468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.277098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.277124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.280857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.281437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:49.952 [2024-07-15 18:49:35.282019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:49.952 [2024-07-15 18:49:35.282578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.705774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! [identical *ERROR* message repeated continuously between 18:49:35.282578 and 18:49:35.705774; duplicate log lines elided]
00:34:50.216 [2024-07-15 18:49:35.706357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.706916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.707486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.708099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.708676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.709247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.709811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.710391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.710887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.710908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.714692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.715273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.715838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.216 [2024-07-15 18:49:35.716410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.717084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.717682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.718255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.718815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.719392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.719980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.720004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.723701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.724277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.724857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.725434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.726019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.216 [2024-07-15 18:49:35.726597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.727174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.727740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.728311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.728913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.728935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.732601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.733180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.733253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.733816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.734378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.734960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.735518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.216 [2024-07-15 18:49:35.736083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.736647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.737207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.737230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.741067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.741637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.742204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.742282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.742873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.743471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.744042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.744607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.745179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.216 [2024-07-15 18:49:35.745740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.745762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.749071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.749151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.749237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.749314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.749890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.749984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.750051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.750120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.750202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.750849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.750871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.216 [2024-07-15 18:49:35.753995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.754066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.754133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.754200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.754816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.754892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.754995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.216 [2024-07-15 18:49:35.755060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.755127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.755735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.755757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.758914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.758995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.217 [2024-07-15 18:49:35.759065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.759133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.759715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.759802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.759889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.759972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.760041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.760603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.760625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.763996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.764078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.764165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.217 [2024-07-15 18:49:35.764242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.764789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.764874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.764939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.765018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.765088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.765648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.765670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.768901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.768978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.769048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.769119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.769725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.769807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.769875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.769944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.770019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.770527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.770549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.773781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.773849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.773919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.774006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.774607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.774714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.774791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.774858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.774928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.775544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.775570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.778771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.778852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.778922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.778997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.779587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.779665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.779732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.779816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.779895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.780546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.780570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.783823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.783891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.783972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.784041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.784633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.784721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.784792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.784897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.784970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.785537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.785559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.788833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.788905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.788996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.789075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.789670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.789761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.789824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.789893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.789975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.790579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.790602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.793828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.793899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.793973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.794044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.794618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.794706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.794771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.794841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.794910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.795460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.795483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.798417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.798489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.798574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.798651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.479 [2024-07-15 18:49:35.799309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.479 [2024-07-15 18:49:35.799390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:50.482 [identical *ERROR* line from accel_dpdk_cryptodev.c:468 repeated continuously from 18:49:35.799457 through 18:49:35.949749]
00:34:50.482 [2024-07-15 18:49:35.950323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.950934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.953211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.955916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.958617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.960436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.960889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.960910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.964401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.966652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.969343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.972034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.972591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.482 [2024-07-15 18:49:35.974977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.977579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.979534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.980133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.980655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.980678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.984792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.987157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.989863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.992134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.992700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.993291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.993852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.482 [2024-07-15 18:49:35.996043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.998608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.999016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:35.999038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:36.001967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:36.002544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:36.003120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:36.005374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.482 [2024-07-15 18:49:36.005778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.008486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.010104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.012487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.015179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.483 [2024-07-15 18:49:36.015582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.015604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.021341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.023866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.483 [2024-07-15 18:49:36.025737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.028067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.028471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.031192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.031878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.032442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.033041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.033554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.033576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.743 [2024-07-15 18:49:36.038436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.041138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.743 [2024-07-15 18:49:36.041854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.042433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.043033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.044065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.046447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.048965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.050825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.051231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.051254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.054432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.055118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.057482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.060181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.060584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.062486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.064748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.067450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.068441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.069080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.069104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.074564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.076201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.078564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.081298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.081814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.082404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.082981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.083545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.085924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.086331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.086354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.090919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.091508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.092085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.092650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.093058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.095512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.098216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.099836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.102219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.102620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.102642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.107399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.109910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.111470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.114165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.114569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.115820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.116391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.116956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.118230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.118672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.118694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.123747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.125834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.126407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.126982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.127594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.128180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.128759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.129331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.129894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.130492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.130515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.134321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.134894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.135466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.136041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.136631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.137217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.137790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.138367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.138927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.139467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.139490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.143225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.143804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.144377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.144939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.145521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.146122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.146689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.147261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.147830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.148363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.148387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.152336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.152912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.153486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.744 [2024-07-15 18:49:36.154060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.154554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.155142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.155708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.156281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.156846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.157419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.157442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.161286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.161856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.744 [2024-07-15 18:49:36.162436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.163019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.163610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.745 [2024-07-15 18:49:36.164202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.164767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.165338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.165905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.166493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.166517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.170485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.171069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.171640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.172214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.172803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.173941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:50.745 [2024-07-15 18:49:36.175239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:50.745 [2024-07-15 18:49:36.176040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:51.010 [... same *ERROR* message from accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources repeated continuously from 18:49:36.176040 through 18:49:36.421852 (≈270 further occurrences omitted) ...]
00:34:51.010 [2024-07-15 18:49:36.421936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.422009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.422076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.422144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.422733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.422754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.425669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.425740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.425816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.425882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.426363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.426453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.426520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.426587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.426653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.427088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.427109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.429840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.429915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.429989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.430057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.430657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.430733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.430801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.430871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.430942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.431423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.431444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.433856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.433923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.434755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.435371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.435394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.438347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.438414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.438487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.438554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.438960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.439045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.439109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.439176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.439243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.439791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.439812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.442396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.442466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.442534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.442603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.443192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.443272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.443339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.443406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.443476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.444079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.444100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.446416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.446482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.446549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.446623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.447028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.447112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.447176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.447243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.447310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.447705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.447726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.451109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.451177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.451243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.451310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.451776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.451858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.451922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.452013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.452080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.452476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.452497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.454850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.454926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.455009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.455079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.455721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.455796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.455864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.455935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.010 [2024-07-15 18:49:36.456008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.456606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.010 [2024-07-15 18:49:36.456627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.459933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.460013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.460081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.460479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.011 [2024-07-15 18:49:36.460499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.463445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.463516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.463584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.463652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.464810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.467263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.011 [2024-07-15 18:49:36.467339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.467405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.467472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.468971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.471810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.471880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.471968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.011 [2024-07-15 18:49:36.472035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.472497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.472578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.472648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.472716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.472783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.473232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.473254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.476075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.476145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.476222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.476290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.476885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.011 [2024-07-15 18:49:36.476975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.477042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.477115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.477182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.477617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.477637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.479995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.011 [2024-07-15 18:49:36.480827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.480893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.481497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.481521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.484551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.484620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.484686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.484784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.485188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.485271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.485336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.485402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.011 [2024-07-15 18:49:36.485469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.011 [2024-07-15 18:49:36.486014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:51.275 [... above *ERROR* from accel_dpdk_cryptodev.c:468 repeated continuously between 2024-07-15 18:49:36.486014 and 18:49:36.774011 ...]
00:34:51.275 [2024-07-15 18:49:36.776384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.779114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.779650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.779672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.782665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.783260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.785263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.787645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.788052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.789736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.792109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.794789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.797038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.275 [2024-07-15 18:49:36.797611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.797632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.802979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.804580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.806956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.809671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.810072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.810655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.811221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.811780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.814023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.814420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.814443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.275 [2024-07-15 18:49:36.819422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.820008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.820570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.821136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.275 [2024-07-15 18:49:36.821535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.823797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.826486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.828112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.830471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.830871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.830892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.836317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.839040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.536 [2024-07-15 18:49:36.841714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.843429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.843864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.846569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.848604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.849171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.849738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.850362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.850385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.855092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.857600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.859381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.536 [2024-07-15 18:49:36.859945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.536 [2024-07-15 18:49:36.860538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.861124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.863494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.866205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.868649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.869115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.869137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.872326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.872893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.875254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.877945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.878349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.880005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.882407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.885115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.886310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.886936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.886966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.892505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.894589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.897039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.899737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.900315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.900896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.901468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.902728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.905088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.905486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.905508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.909188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.909772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.910342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.911895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.912376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.915089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.917267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.919562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.922268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.922666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.922688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.928490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.930989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.932796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.935169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.935576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.938303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.938878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.939449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.940017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.940478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.940499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.945578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.948287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.948856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.949421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.950064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.951941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.954205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.956910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.958535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.959007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.959029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.962357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.964524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.964602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.967320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.967718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.969355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.971737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.974311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.976201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.976859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.976883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.982459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.984100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.986468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.986547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.986959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.988863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.989443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.990008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.990977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.991415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.991436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.993812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.993890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.993963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.994031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.994429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.994510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.994574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.537 [2024-07-15 18:49:36.994644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.994720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.995380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.995402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.998333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.537 [2024-07-15 18:49:36.998407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.998479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.998545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.998944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.999034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.999099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.999174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.999242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.538 [2024-07-15 18:49:36.999681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:36.999708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.002459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.002530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.002598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.002666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.003353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.003431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.003502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.003571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.003639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.004102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:51.538 [2024-07-15 18:49:37.004124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:51.538 [2024-07-15 18:49:37.006470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:51.539 [... identical "Failed to get src_mbufs!" error records repeated through 2024-07-15 18:49:37.110259; duplicates elided ...]
00:34:54.332
00:34:54.332 Latency(us)
00:34:54.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:54.332 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x0 length 0x100
00:34:54.332 crypto_ram : 6.22 41.17 2.57 0.00 0.00 3025849.78 216705.71 2732289.46
00:34:54.332 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x100 length 0x100
00:34:54.332 crypto_ram : 5.95 24.18 1.51 0.00 0.00 4915466.78 14168.26 3675009.22
00:34:54.332 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x0 length 0x100
00:34:54.332 crypto_ram1 : 6.22 41.17 2.57 0.00 0.00 2910061.71 215707.06 2524571.55
00:34:54.332 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x100 length 0x100
00:34:54.332 crypto_ram1 : 5.99 26.03 1.63 0.00 0.00 4345606.54 21221.18 3339464.90
00:34:54.332 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x0 length 0x100
00:34:54.332 crypto_ram2 : 5.65 236.39 14.77 0.00 0.00 477376.13 41443.72 607175.44
00:34:54.332 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x100 length 0x100
00:34:54.332 crypto_ram2 : 5.83 181.26 11.33 0.00 0.00 599685.19 21595.67 723018.12
00:34:54.332 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x0 length 0x100
00:34:54.332 crypto_ram3 : 5.80 246.99 15.44 0.00 0.00 440179.22 15541.39 547256.81
00:34:54.332 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:54.332 Verification LBA range: start 0x100 length 0x100
00:34:54.332 crypto_ram3 : 5.99 208.38 13.02 0.00 0.00 511280.77 9237.46 782936.75
00:34:54.332 ===================================================================================================================
00:34:54.332 Total : 1005.57 62.85 0.00 0.00 925202.81 9237.46 3675009.22
00:34:54.621
00:34:54.621 real 0m9.271s
00:34:54.621 user 0m17.744s
00:34:54.621 sys 0m0.352s
18:49:40 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
18:49:40 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:34:54.621 ************************************
00:34:54.621 END TEST bdev_verify_big_io
00:34:54.621 ************************************
18:49:40 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
18:49:40 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
18:49:40 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
18:49:40 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
18:49:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:54.621 ************************************
00:34:54.621 START TEST bdev_write_zeroes
00:34:54.621 ************************************
18:49:40 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:54.621 [2024-07-15 18:49:40.132996] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:34:54.621 [2024-07-15 18:49:40.133056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009646 ] 00:34:54.880 [2024-07-15 18:49:40.230267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.880 [2024-07-15 18:49:40.326645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.880 [2024-07-15 18:49:40.347936] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:54.880 [2024-07-15 18:49:40.355973] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:54.880 [2024-07-15 18:49:40.363990] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:55.137 [2024-07-15 18:49:40.469963] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:57.671 [2024-07-15 18:49:42.646235] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:57.671 [2024-07-15 18:49:42.646295] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:57.671 [2024-07-15 18:49:42.646308] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:57.671 [2024-07-15 18:49:42.654255] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:57.671 [2024-07-15 18:49:42.654272] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:57.671 [2024-07-15 18:49:42.654280] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:57.671 [2024-07-15 18:49:42.662275] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_qat_cbc2" 00:34:57.671 [2024-07-15 18:49:42.662290] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:57.671 [2024-07-15 18:49:42.662304] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:57.671 [2024-07-15 18:49:42.670296] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:57.671 [2024-07-15 18:49:42.670312] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:57.671 [2024-07-15 18:49:42.670320] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:57.671 Running I/O for 1 seconds... 00:34:58.255 00:34:58.255 Latency(us) 00:34:58.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.255 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:58.255 crypto_ram : 1.03 1840.14 7.19 0.00 0.00 69008.91 6116.69 83386.76 00:34:58.255 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:58.255 crypto_ram1 : 1.03 1853.28 7.24 0.00 0.00 68165.49 6085.49 77394.90 00:34:58.255 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:58.255 crypto_ram2 : 1.02 14115.13 55.14 0.00 0.00 8924.26 2668.25 11671.65 00:34:58.255 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:58.255 crypto_ram3 : 1.02 14147.85 55.27 0.00 0.00 8872.32 2668.25 9299.87 00:34:58.255 =================================================================================================================== 00:34:58.255 Total : 31956.41 124.83 0.00 0.00 15829.08 2668.25 83386.76 00:34:58.822 00:34:58.822 real 0m4.052s 00:34:58.822 user 0m3.716s 00:34:58.822 sys 0m0.292s 00:34:58.822 18:49:44 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:58.822 18:49:44 
blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:58.822 ************************************ 00:34:58.822 END TEST bdev_write_zeroes 00:34:58.822 ************************************ 00:34:58.822 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:34:58.822 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:58.822 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:58.822 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.822 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:58.822 ************************************ 00:34:58.822 START TEST bdev_json_nonenclosed 00:34:58.822 ************************************ 00:34:58.822 18:49:44 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:58.822 [2024-07-15 18:49:44.225081] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:34:58.822 [2024-07-15 18:49:44.225139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010289 ] 00:34:58.822 [2024-07-15 18:49:44.320894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.081 [2024-07-15 18:49:44.413003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.081 [2024-07-15 18:49:44.413066] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:34:59.081 [2024-07-15 18:49:44.413082] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:59.081 [2024-07-15 18:49:44.413091] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:59.081 00:34:59.081 real 0m0.337s 00:34:59.081 user 0m0.227s 00:34:59.081 sys 0m0.108s 00:34:59.081 18:49:44 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:34:59.081 18:49:44 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.081 18:49:44 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:59.081 ************************************ 00:34:59.081 END TEST bdev_json_nonenclosed 00:34:59.081 ************************************ 00:34:59.081 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:34:59.081 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # true 00:34:59.081 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:59.081 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # 
'[' 13 -le 1 ']' 00:34:59.081 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.081 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:59.081 ************************************ 00:34:59.081 START TEST bdev_json_nonarray 00:34:59.081 ************************************ 00:34:59.081 18:49:44 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:59.081 [2024-07-15 18:49:44.607160] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:34:59.081 [2024-07-15 18:49:44.607219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010394 ] 00:34:59.340 [2024-07-15 18:49:44.704161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.340 [2024-07-15 18:49:44.795700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.340 [2024-07-15 18:49:44.795770] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:59.340 [2024-07-15 18:49:44.795787] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:59.340 [2024-07-15 18:49:44.795796] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:59.340 00:34:59.340 real 0m0.339s 00:34:59.340 user 0m0.217s 00:34:59.340 sys 0m0.120s 00:34:59.340 18:49:44 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:34:59.340 18:49:44 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.340 18:49:44 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:59.340 ************************************ 00:34:59.340 END TEST bdev_json_nonarray 00:34:59.340 ************************************ 00:34:59.600 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # true 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]] 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]] 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]] 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t 
]] 00:34:59.600 18:49:44 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:34:59.600 00:34:59.600 real 1m12.278s 00:34:59.600 user 2m50.149s 00:34:59.600 sys 0m7.144s 00:34:59.600 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.600 18:49:44 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:59.600 ************************************ 00:34:59.600 END TEST blockdev_crypto_qat 00:34:59.600 ************************************ 00:34:59.600 18:49:44 -- common/autotest_common.sh@1142 -- # return 0 00:34:59.600 18:49:44 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:34:59.600 18:49:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:59.600 18:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.600 18:49:44 -- common/autotest_common.sh@10 -- # set +x 00:34:59.600 ************************************ 00:34:59.600 START TEST chaining 00:34:59.600 ************************************ 00:34:59.600 18:49:44 chaining -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:34:59.600 * Looking for test storage... 
00:34:59.600 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:59.600 18:49:45 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@7 -- # uname -s 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b98b40-9a1d-eb11-906e-0017a4403562 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=80b98b40-9a1d-eb11-906e-0017a4403562 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:59.600 18:49:45 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.600 18:49:45 chaining -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.600 18:49:45 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.600 18:49:45 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.600 18:49:45 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.600 18:49:45 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.600 18:49:45 chaining -- paths/export.sh@5 -- # export PATH 00:34:59.600 18:49:45 chaining -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@47 -- # : 0 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.600 18:49:45 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.600 18:49:45 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:34:59.601 18:49:45 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:34:59.601 18:49:45 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:34:59.601 18:49:45 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:34:59.601 18:49:45 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:34:59.601 18:49:45 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@410 -- # local 
-g is_hw=no 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.601 18:49:45 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.601 18:49:45 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:59.601 18:49:45 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:34:59.601 18:49:45 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@296 -- # e810=() 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@297 -- # x722=() 00:35:06.171 18:49:51 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@298 -- # mlx=() 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@335 -- # (( 0 == 0 )) 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@336 -- # return 1 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@423 -- # [[ phy-fallback == phy ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@426 -- # [[ phy-fallback == phy-fallback ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@427 -- # echo 'WARNING: No supported devices were found, fallback requested for tcp test' 00:35:06.172 WARNING: No supported devices were found, fallback requested for tcp test 00:35:06.172 18:49:51 chaining -- 
nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:06.172 Cannot find device "nvmf_tgt_br" 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@155 -- # true 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:06.172 Cannot find device "nvmf_tgt_br2" 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@156 -- # true 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:06.172 Cannot find device "nvmf_tgt_br" 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@158 -- # 
true 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:06.172 Cannot find device "nvmf_tgt_br2" 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@159 -- # true 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:06.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@162 -- # true 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:06.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@163 -- # true 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@183 -- # ip link set 
nvmf_init_if up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:06.172 18:49:51 chaining -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:06.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:35:06.432 00:35:06.432 --- 10.0.0.2 ping statistics --- 00:35:06.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.432 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:06.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:35:06.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:35:06.432 00:35:06.432 --- 10.0.0.3 ping statistics --- 00:35:06.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.432 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:06.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:35:06.432 00:35:06.432 --- 10.0.0.1 ping statistics --- 00:35:06.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.432 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@433 -- # return 0 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:06.432 18:49:51 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@481 -- # nvmfpid=3014239 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -m 0x2 00:35:06.432 18:49:51 chaining -- nvmf/common.sh@482 -- # waitforlisten 3014239 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@829 -- # '[' -z 3014239 ']' 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:06.432 18:49:51 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:06.432 [2024-07-15 18:49:51.983578] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:35:06.433 [2024-07-15 18:49:51.983636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.691 [2024-07-15 18:49:52.092750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.691 [2024-07-15 18:49:52.204991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.691 [2024-07-15 18:49:52.205038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.691 [2024-07-15 18:49:52.205051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.691 [2024-07-15 18:49:52.205062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.691 [2024-07-15 18:49:52.205072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:06.691 [2024-07-15 18:49:52.205097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.626 18:49:52 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:07.626 18:49:52 chaining -- common/autotest_common.sh@862 -- # return 0 00:35:07.626 18:49:52 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:07.626 18:49:52 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:07.626 18:49:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:07.626 18:49:52 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.626 18:49:52 chaining -- bdev/chaining.sh@69 -- # mktemp 00:35:07.626 18:49:52 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.lAnsvLxos1 00:35:07.626 18:49:52 chaining -- bdev/chaining.sh@69 -- # mktemp 00:35:07.626 18:49:52 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.JeBzq63268 00:35:07.626 18:49:52 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:35:07.626 18:49:52 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:35:07.626 18:49:52 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.626 18:49:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:07.626 malloc0 00:35:07.626 true 00:35:07.626 true 00:35:07.626 [2024-07-15 18:49:53.025786] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:07.626 crypto0 00:35:07.626 [2024-07-15 18:49:53.033813] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:35:07.626 crypto1 00:35:07.626 [2024-07-15 18:49:53.041978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.626 [2024-07-15 18:49:53.058229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.626 18:49:53 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@85 -- # 
update_stats 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:07.626 18:49:53 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:07.627 18:49:53 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.627 18:49:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:07.627 18:49:53 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:07.627 18:49:53 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.627 18:49:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:07.627 18:49:53 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:35:07.627 18:49:53 chaining -- 
bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:07.627 18:49:53 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:07.902 18:49:53 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.902 18:49:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:07.902 18:49:53 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:07.902 18:49:53 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.902 18:49:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:07.902 18:49:53 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@88 -- # 
dd if=/dev/urandom of=/tmp/tmp.lAnsvLxos1 bs=1K count=64 00:35:07.902 64+0 records in 00:35:07.902 64+0 records out 00:35:07.902 65536 bytes (66 kB, 64 KiB) copied, 0.000843485 s, 77.7 MB/s 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.lAnsvLxos1 --ob Nvme0n1 --bs 65536 --count 1 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@25 -- # local config 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:07.902 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:07.902 "subsystems": [ 00:35:07.902 { 00:35:07.902 "subsystem": "bdev", 00:35:07.902 "config": [ 00:35:07.902 { 00:35:07.902 "method": "bdev_nvme_attach_controller", 00:35:07.902 "params": { 00:35:07.902 "trtype": "tcp", 00:35:07.902 "adrfam": "IPv4", 00:35:07.902 "name": "Nvme0", 00:35:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.902 "traddr": "10.0.0.2", 00:35:07.902 "trsvcid": "4420" 00:35:07.902 } 00:35:07.902 }, 00:35:07.902 { 00:35:07.902 "method": "bdev_set_options", 00:35:07.902 "params": { 00:35:07.902 "bdev_auto_examine": false 00:35:07.902 } 00:35:07.902 } 00:35:07.902 ] 00:35:07.902 } 00:35:07.902 ] 00:35:07.902 }' 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.lAnsvLxos1 --ob Nvme0n1 --bs 65536 --count 1 00:35:07.902 18:49:53 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:07.902 "subsystems": [ 00:35:07.902 { 00:35:07.902 "subsystem": "bdev", 00:35:07.902 "config": [ 00:35:07.902 { 00:35:07.902 "method": "bdev_nvme_attach_controller", 00:35:07.902 "params": { 
00:35:07.902 "trtype": "tcp", 00:35:07.902 "adrfam": "IPv4", 00:35:07.902 "name": "Nvme0", 00:35:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.902 "traddr": "10.0.0.2", 00:35:07.902 "trsvcid": "4420" 00:35:07.902 } 00:35:07.902 }, 00:35:07.902 { 00:35:07.902 "method": "bdev_set_options", 00:35:07.902 "params": { 00:35:07.902 "bdev_auto_examine": false 00:35:07.902 } 00:35:07.902 } 00:35:07.902 ] 00:35:07.902 } 00:35:07.902 ] 00:35:07.902 }' 00:35:07.902 [2024-07-15 18:49:53.382604] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:35:07.902 [2024-07-15 18:49:53.382660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014497 ] 00:35:08.159 [2024-07-15 18:49:53.479760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.159 [2024-07-15 18:49:53.570513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.418  Copying: 64/64 [kB] (average 31 MBps) 00:35:08.418 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:08.418 18:49:53 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:08.676 18:49:53 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:08.676 18:49:53 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.676 18:49:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.676 18:49:53 chaining -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.676 18:49:54 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@96 -- # update_stats 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.676 
18:49:54 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:08.676 18:49:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.676 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.933 18:49:54 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:08.933 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.933 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.933 18:49:54 
chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:08.933 18:49:54 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.933 18:49:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:08.933 18:49:54 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.JeBzq63268 --ib Nvme0n1 --bs 65536 --count 1 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@25 -- # local config 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:08.933 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:08.933 "subsystems": [ 00:35:08.933 { 00:35:08.933 "subsystem": "bdev", 00:35:08.933 "config": [ 00:35:08.933 { 00:35:08.933 "method": "bdev_nvme_attach_controller", 00:35:08.933 
"params": { 00:35:08.933 "trtype": "tcp", 00:35:08.933 "adrfam": "IPv4", 00:35:08.933 "name": "Nvme0", 00:35:08.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.933 "traddr": "10.0.0.2", 00:35:08.933 "trsvcid": "4420" 00:35:08.933 } 00:35:08.933 }, 00:35:08.933 { 00:35:08.933 "method": "bdev_set_options", 00:35:08.933 "params": { 00:35:08.933 "bdev_auto_examine": false 00:35:08.933 } 00:35:08.933 } 00:35:08.933 ] 00:35:08.933 } 00:35:08.933 ] 00:35:08.933 }' 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.JeBzq63268 --ib Nvme0n1 --bs 65536 --count 1 00:35:08.933 18:49:54 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:08.933 "subsystems": [ 00:35:08.933 { 00:35:08.933 "subsystem": "bdev", 00:35:08.933 "config": [ 00:35:08.933 { 00:35:08.933 "method": "bdev_nvme_attach_controller", 00:35:08.933 "params": { 00:35:08.933 "trtype": "tcp", 00:35:08.933 "adrfam": "IPv4", 00:35:08.933 "name": "Nvme0", 00:35:08.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.933 "traddr": "10.0.0.2", 00:35:08.933 "trsvcid": "4420" 00:35:08.933 } 00:35:08.933 }, 00:35:08.933 { 00:35:08.933 "method": "bdev_set_options", 00:35:08.933 "params": { 00:35:08.933 "bdev_auto_examine": false 00:35:08.933 } 00:35:08.933 } 00:35:08.933 ] 00:35:08.933 } 00:35:08.933 ] 00:35:08.933 }' 00:35:08.933 [2024-07-15 18:49:54.472632] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:35:08.933 [2024-07-15 18:49:54.472692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014637 ] 00:35:09.191 [2024-07-15 18:49:54.573277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.191 [2024-07-15 18:49:54.664063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.706  Copying: 64/64 [kB] (average 20 MBps) 00:35:09.706 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@40 -- # [[ -z 
encrypt ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@40 -- # 
[[ -z copy ]] 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:09.706 18:49:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:09.706 18:49:55 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.lAnsvLxos1 /tmp/tmp.JeBzq63268 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@25 -- # local config 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:09.964 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:09.964 18:49:55 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:09.964 "subsystems": [ 00:35:09.964 { 00:35:09.964 "subsystem": "bdev", 00:35:09.964 "config": [ 00:35:09.964 { 00:35:09.964 "method": "bdev_nvme_attach_controller", 00:35:09.964 "params": { 00:35:09.964 "trtype": "tcp", 00:35:09.964 "adrfam": "IPv4", 00:35:09.964 "name": "Nvme0", 00:35:09.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.964 "traddr": "10.0.0.2", 00:35:09.964 "trsvcid": "4420" 00:35:09.964 } 00:35:09.964 }, 00:35:09.965 { 00:35:09.965 "method": "bdev_set_options", 00:35:09.965 "params": { 00:35:09.965 "bdev_auto_examine": false 00:35:09.965 } 00:35:09.965 } 00:35:09.965 ] 00:35:09.965 } 00:35:09.965 ] 00:35:09.965 }' 00:35:09.965 
18:49:55 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:35:09.965 18:49:55 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:09.965 "subsystems": [ 00:35:09.965 { 00:35:09.965 "subsystem": "bdev", 00:35:09.965 "config": [ 00:35:09.965 { 00:35:09.965 "method": "bdev_nvme_attach_controller", 00:35:09.965 "params": { 00:35:09.965 "trtype": "tcp", 00:35:09.965 "adrfam": "IPv4", 00:35:09.965 "name": "Nvme0", 00:35:09.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.965 "traddr": "10.0.0.2", 00:35:09.965 "trsvcid": "4420" 00:35:09.965 } 00:35:09.965 }, 00:35:09.965 { 00:35:09.965 "method": "bdev_set_options", 00:35:09.965 "params": { 00:35:09.965 "bdev_auto_examine": false 00:35:09.965 } 00:35:09.965 } 00:35:09.965 ] 00:35:09.965 } 00:35:09.965 ] 00:35:09.965 }' 00:35:09.965 [2024-07-15 18:49:55.376317] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
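The `get_stat`/`update_stats` helpers repeated throughout this run boil down to two jq extractions over the `accel_get_stats` RPC response. A sketch of both, run against a sample payload whose field names match what the test parses (the counter values here are illustrative, not taken from a live target):

```shell
# Sample shape of an accel_get_stats response (values illustrative).
stats_json='{
  "sequence_executed": 13,
  "operations": [
    { "opcode": "encrypt", "executed": 2 },
    { "opcode": "decrypt", "executed": 12 },
    { "opcode": "copy",    "executed": 4 }
  ]
}'

# Top-level counter, as in: rpc_cmd accel_get_stats | jq -r .sequence_executed
seq=$(echo "$stats_json" | jq -r .sequence_executed)

# Per-opcode counter, as in:
#   jq -r '.operations[] | select(.opcode == "encrypt").executed'
enc=$(echo "$stats_json" | jq -r '.operations[] | select(.opcode == "encrypt").executed')

echo "sequence_executed=$seq encrypt_executed=$enc"
```

The test then compares the freshly read counters against the cached `stats[...]` values, e.g. `(( 13 == stats[sequence_executed] + 1 ))` after a single chained I/O.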
00:35:09.965 [2024-07-15 18:49:55.376359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014869 ] 00:35:09.965 [2024-07-15 18:49:55.461960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.223 [2024-07-15 18:49:55.552687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.482  Copying: 64/64 [kB] (average 20 MBps) 00:35:10.482 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@106 -- # update_stats 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:10.482 18:49:55 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.482 18:49:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:10.482 18:49:55 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 
00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:10.482 18:49:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:10.482 18:49:55 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.482 18:49:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:10.482 18:49:56 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:10.741 18:49:56 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.741 18:49:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:10.741 18:49:56 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:10.741 
18:49:56 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:10.741 18:49:56 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.741 18:49:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:10.741 18:49:56 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.lAnsvLxos1 --ob Nvme0n1 --bs 4096 --count 16 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@25 -- # local config 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:10.741 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:10.741 "subsystems": [ 00:35:10.741 { 00:35:10.741 "subsystem": "bdev", 00:35:10.741 "config": [ 00:35:10.741 { 00:35:10.741 "method": "bdev_nvme_attach_controller", 00:35:10.741 "params": { 00:35:10.741 "trtype": "tcp", 00:35:10.741 "adrfam": "IPv4", 00:35:10.741 "name": "Nvme0", 00:35:10.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.741 "traddr": "10.0.0.2", 00:35:10.741 "trsvcid": "4420" 00:35:10.741 } 00:35:10.741 }, 00:35:10.741 { 00:35:10.741 "method": "bdev_set_options", 00:35:10.741 "params": { 00:35:10.741 "bdev_auto_examine": false 00:35:10.741 } 00:35:10.741 } 00:35:10.741 ] 00:35:10.741 } 00:35:10.741 ] 00:35:10.741 }' 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@33 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.lAnsvLxos1 --ob Nvme0n1 --bs 4096 --count 16 00:35:10.741 18:49:56 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:10.741 "subsystems": [ 00:35:10.741 { 00:35:10.741 "subsystem": "bdev", 00:35:10.741 "config": [ 00:35:10.741 { 00:35:10.741 "method": "bdev_nvme_attach_controller", 00:35:10.741 "params": { 00:35:10.741 "trtype": "tcp", 00:35:10.741 "adrfam": "IPv4", 00:35:10.741 "name": "Nvme0", 00:35:10.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.741 "traddr": "10.0.0.2", 00:35:10.741 "trsvcid": "4420" 00:35:10.741 } 00:35:10.741 }, 00:35:10.741 { 00:35:10.741 "method": "bdev_set_options", 00:35:10.741 "params": { 00:35:10.741 "bdev_auto_examine": false 00:35:10.741 } 00:35:10.741 } 00:35:10.741 ] 00:35:10.741 } 00:35:10.741 ] 00:35:10.741 }' 00:35:10.741 [2024-07-15 18:49:56.282671] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:35:10.741 [2024-07-15 18:49:56.282729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015020 ] 00:35:11.000 [2024-07-15 18:49:56.381257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.000 [2024-07-15 18:49:56.472099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.517  Copying: 64/64 [kB] (average 9142 kBps) 00:35:11.517 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@40 -- 
# [[ -z '' ]] 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 
00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.517 18:49:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:11.517 18:49:56 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:11.517 18:49:57 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.517 18:49:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.517 18:49:57 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@114 -- # update_stats 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 
00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:11.517 18:49:57 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:11.517 18:49:57 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.517 18:49:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.776 18:49:57 chaining -- 
bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:11.776 18:49:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:11.776 18:49:57 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@117 -- # : 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.JeBzq63268 --ib Nvme0n1 --bs 4096 --count 16 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@25 -- # local config 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems 
--trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:11.777 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:11.777 "subsystems": [ 00:35:11.777 { 00:35:11.777 "subsystem": "bdev", 00:35:11.777 "config": [ 00:35:11.777 { 00:35:11.777 "method": "bdev_nvme_attach_controller", 00:35:11.777 "params": { 00:35:11.777 "trtype": "tcp", 00:35:11.777 "adrfam": "IPv4", 00:35:11.777 "name": "Nvme0", 00:35:11.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.777 "traddr": "10.0.0.2", 00:35:11.777 "trsvcid": "4420" 00:35:11.777 } 00:35:11.777 }, 00:35:11.777 { 00:35:11.777 "method": "bdev_set_options", 00:35:11.777 "params": { 00:35:11.777 "bdev_auto_examine": false 00:35:11.777 } 00:35:11.777 } 00:35:11.777 ] 00:35:11.777 } 00:35:11.777 ] 00:35:11.777 }' 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.JeBzq63268 --ib Nvme0n1 --bs 4096 --count 16 00:35:11.777 18:49:57 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:11.777 "subsystems": [ 00:35:11.777 { 00:35:11.777 "subsystem": "bdev", 00:35:11.777 "config": [ 00:35:11.777 { 00:35:11.777 "method": "bdev_nvme_attach_controller", 00:35:11.777 "params": { 00:35:11.777 "trtype": "tcp", 00:35:11.777 "adrfam": "IPv4", 00:35:11.777 "name": "Nvme0", 00:35:11.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.777 "traddr": "10.0.0.2", 00:35:11.777 "trsvcid": "4420" 00:35:11.777 } 00:35:11.777 }, 00:35:11.777 { 00:35:11.777 "method": "bdev_set_options", 00:35:11.777 "params": { 00:35:11.777 "bdev_auto_examine": false 00:35:11.777 } 00:35:11.777 } 00:35:11.777 ] 00:35:11.777 } 00:35:11.777 ] 00:35:11.777 }' 00:35:12.036 [2024-07-15 18:49:57.343807] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 
initialization... 00:35:12.036 [2024-07-15 18:49:57.343863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015147 ] 00:35:12.036 [2024-07-15 18:49:57.441433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.036 [2024-07-15 18:49:57.532121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.554  Copying: 64/64 [kB] (average 1333 kBps) 00:35:12.554 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:12.554 18:49:57 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:12.554 18:49:57 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.554 18:49:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:12.554 18:49:57 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:12.554 18:49:58 chaining -- 
bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:12.554 18:49:58 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.554 18:49:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:12.554 18:49:58 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:12.554 18:49:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:12.554 18:49:58 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.554 18:49:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:12.554 18:49:58 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:12.814 18:49:58 
chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.lAnsvLxos1 /tmp/tmp.JeBzq63268 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.lAnsvLxos1 /tmp/tmp.JeBzq63268 00:35:12.814 18:49:58 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@117 -- # sync 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@120 -- # set +e 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:12.814 rmmod nvme_tcp 00:35:12.814 rmmod nvme_fabrics 00:35:12.814 rmmod nvme_keyring 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@124 -- # set -e 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@125 -- # return 0 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@489 -- # '[' -n 3014239 ']' 00:35:12.814 18:49:58 chaining -- nvmf/common.sh@490 -- # killprocess 3014239 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@948 -- # '[' -z 
3014239 ']' 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@952 -- # kill -0 3014239 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@953 -- # uname 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3014239 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3014239' 00:35:12.814 killing process with pid 3014239 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@967 -- # kill 3014239 00:35:12.814 18:49:58 chaining -- common/autotest_common.sh@972 -- # wait 3014239 00:35:13.074 18:49:58 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:13.074 18:49:58 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:13.074 18:49:58 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:13.074 18:49:58 chaining -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:13.074 18:49:58 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:13.074 18:49:58 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.074 18:49:58 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:13.074 18:49:58 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.333 18:49:58 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:13.333 18:49:58 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:35:13.333 18:49:58 chaining -- bdev/chaining.sh@132 -- # bperfpid=3015400 00:35:13.333 18:49:58 chaining -- bdev/chaining.sh@134 -- # waitforlisten 3015400 00:35:13.333 18:49:58 chaining -- bdev/chaining.sh@131 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:35:13.333 18:49:58 chaining -- common/autotest_common.sh@829 -- # '[' -z 3015400 ']' 00:35:13.333 18:49:58 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.333 18:49:58 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:13.333 18:49:58 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.333 18:49:58 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:13.333 18:49:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:13.333 [2024-07-15 18:49:58.711674] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:35:13.333 [2024-07-15 18:49:58.711740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015400 ] 00:35:13.333 [2024-07-15 18:49:58.812069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.591 [2024-07-15 18:49:58.907924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.160 18:49:59 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:14.160 18:49:59 chaining -- common/autotest_common.sh@862 -- # return 0 00:35:14.160 18:49:59 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:35:14.160 18:49:59 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.160 18:49:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:14.419 malloc0 00:35:14.419 true 00:35:14.419 true 00:35:14.419 [2024-07-15 18:49:59.800333] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: 
Found key "key0" 00:35:14.419 crypto0 00:35:14.419 [2024-07-15 18:49:59.808344] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:35:14.419 crypto1 00:35:14.419 18:49:59 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.419 18:49:59 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:14.419 Running I/O for 5 seconds... 00:35:19.749 00:35:19.749 Latency(us) 00:35:19.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.749 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:35:19.749 Verification LBA range: start 0x0 length 0x2000 00:35:19.749 crypto1 : 5.01 10426.63 40.73 0.00 0.00 24482.31 6959.30 15728.64 00:35:19.749 =================================================================================================================== 00:35:19.749 Total : 10426.63 40.73 0.00 0.00 24482.31 6959.30 15728.64 00:35:19.749 0 00:35:19.749 18:50:04 chaining -- bdev/chaining.sh@146 -- # killprocess 3015400 00:35:19.749 18:50:04 chaining -- common/autotest_common.sh@948 -- # '[' -z 3015400 ']' 00:35:19.749 18:50:04 chaining -- common/autotest_common.sh@952 -- # kill -0 3015400 00:35:19.749 18:50:04 chaining -- common/autotest_common.sh@953 -- # uname 00:35:19.749 18:50:04 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:19.749 18:50:04 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3015400 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3015400' 00:35:19.749 killing process with pid 3015400 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@967 -- # kill 3015400 00:35:19.749 Received shutdown 
signal, test time was about 5.000000 seconds 00:35:19.749 00:35:19.749 Latency(us) 00:35:19.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.749 =================================================================================================================== 00:35:19.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@972 -- # wait 3015400 00:35:19.749 18:50:05 chaining -- bdev/chaining.sh@152 -- # bperfpid=3016454 00:35:19.749 18:50:05 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:35:19.749 18:50:05 chaining -- bdev/chaining.sh@154 -- # waitforlisten 3016454 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@829 -- # '[' -z 3016454 ']' 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:19.749 18:50:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:19.749 [2024-07-15 18:50:05.268637] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:35:19.749 [2024-07-15 18:50:05.268697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3016454 ] 00:35:20.008 [2024-07-15 18:50:05.366671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.008 [2024-07-15 18:50:05.461042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.944 18:50:06 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:20.944 18:50:06 chaining -- common/autotest_common.sh@862 -- # return 0 00:35:20.944 18:50:06 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:35:20.944 18:50:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.944 18:50:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:20.944 malloc0 00:35:20.944 true 00:35:20.944 true 00:35:20.944 [2024-07-15 18:50:06.364096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:35:20.944 [2024-07-15 18:50:06.364141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.944 [2024-07-15 18:50:06.364161] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d0ee0 00:35:20.944 [2024-07-15 18:50:06.364170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:20.944 [2024-07-15 18:50:06.365284] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.944 [2024-07-15 18:50:06.365309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:35:20.944 pt0 00:35:20.944 [2024-07-15 18:50:06.372129] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:20.944 crypto0 00:35:20.944 [2024-07-15 18:50:06.380148] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:35:20.944 crypto1 00:35:20.944 18:50:06 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.944 18:50:06 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:21.202 Running I/O for 5 seconds... 00:35:26.472 00:35:26.472 Latency(us) 00:35:26.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.472 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:35:26.472 Verification LBA range: start 0x0 length 0x2000 00:35:26.472 crypto1 : 5.01 8270.41 32.31 0.00 0.00 30865.05 7177.75 18474.91 00:35:26.472 =================================================================================================================== 00:35:26.472 Total : 8270.41 32.31 0.00 0.00 30865.05 7177.75 18474.91 00:35:26.472 0 00:35:26.472 18:50:11 chaining -- bdev/chaining.sh@167 -- # killprocess 3016454 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@948 -- # '[' -z 3016454 ']' 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@952 -- # kill -0 3016454 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@953 -- # uname 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3016454 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3016454' 00:35:26.472 killing process with pid 3016454 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@967 -- # kill 3016454 00:35:26.472 Received shutdown signal, test time was about 5.000000 seconds 00:35:26.472 00:35:26.472 Latency(us) 00:35:26.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.472 
=================================================================================================================== 00:35:26.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@972 -- # wait 3016454 00:35:26.472 18:50:11 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:35:26.472 18:50:11 chaining -- bdev/chaining.sh@170 -- # killprocess 3016454 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@948 -- # '[' -z 3016454 ']' 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@952 -- # kill -0 3016454 00:35:26.472 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3016454) - No such process 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@975 -- # echo 'Process with pid 3016454 is not found' 00:35:26.472 Process with pid 3016454 is not found 00:35:26.472 18:50:11 chaining -- bdev/chaining.sh@171 -- # wait 3016454 00:35:26.472 18:50:11 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.472 18:50:11 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.472 18:50:11 chaining 
-- common/autotest_common.sh@10 -- # set +x 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@296 -- # e810=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@297 -- # x722=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@298 -- # mlx=() 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.472 18:50:11 
chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@335 -- # (( 0 == 0 )) 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@336 -- # return 1 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@423 -- # [[ phy-fallback == phy ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@426 -- # [[ phy-fallback == phy-fallback ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@427 -- # echo 'WARNING: No supported devices were found, fallback requested for tcp test' 00:35:26.472 WARNING: No supported devices were found, fallback requested for tcp test 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:26.472 18:50:11 chaining -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:26.472 Cannot find device "nvmf_tgt_br" 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@155 -- # true 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:26.472 Cannot find device "nvmf_tgt_br2" 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@156 -- # true 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:26.472 Cannot find device "nvmf_tgt_br" 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@158 -- # true 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:26.472 Cannot find device "nvmf_tgt_br2" 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@159 -- # true 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:26.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:26.472 18:50:11 chaining -- nvmf/common.sh@162 -- # true 00:35:26.472 18:50:11 
chaining -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:26.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:26.473 18:50:11 chaining -- nvmf/common.sh@163 -- # true 00:35:26.473 18:50:11 chaining -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:26.473 18:50:11 chaining -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:26.473 18:50:11 chaining -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:26.473 18:50:12 chaining -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@192 -- # ip link 
add nvmf_br type bridge 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:26.732 18:50:12 chaining -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:26.990 18:50:12 chaining -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:26.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:35:26.991 00:35:26.991 --- 10.0.0.2 ping statistics --- 00:35:26.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.991 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:26.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:26.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:35:26.991 00:35:26.991 --- 10.0.0.3 ping statistics --- 00:35:26.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.991 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:26.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:35:26.991 00:35:26.991 --- 10.0.0.1 ping statistics --- 00:35:26.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.991 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@433 -- # return 0 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:26.991 18:50:12 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@481 -- # nvmfpid=3017810 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:26.991 18:50:12 chaining -- nvmf/common.sh@482 -- # waitforlisten 3017810 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@829 -- # '[' -z 3017810 ']' 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:26.991 18:50:12 
chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:26.991 18:50:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:26.991 [2024-07-15 18:50:12.455338] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:35:26.991 [2024-07-15 18:50:12.455408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.250 [2024-07-15 18:50:12.576400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.250 [2024-07-15 18:50:12.684605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.250 [2024-07-15 18:50:12.684651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.250 [2024-07-15 18:50:12.684664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.250 [2024-07-15 18:50:12.684676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.250 [2024-07-15 18:50:12.684686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:27.250 [2024-07-15 18:50:12.684711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@862 -- # return 0 00:35:28.186 18:50:13 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:28.186 18:50:13 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.186 18:50:13 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:28.186 malloc0 00:35:28.186 [2024-07-15 18:50:13.694516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:28.186 [2024-07-15 18:50:13.710707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.186 18:50:13 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:35:28.186 18:50:13 chaining -- bdev/chaining.sh@189 -- # bperfpid=3018043 00:35:28.186 18:50:13 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:35:28.186 18:50:13 chaining -- bdev/chaining.sh@191 -- # waitforlisten 3018043 /var/tmp/bperf.sock 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@829 -- # '[' -z 3018043 ']' 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:28.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:28.186 18:50:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:28.445 [2024-07-15 18:50:13.780143] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:35:28.445 [2024-07-15 18:50:13.780199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018043 ] 00:35:28.445 [2024-07-15 18:50:13.878015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.445 [2024-07-15 18:50:13.974268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.704 18:50:14 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:28.704 18:50:14 chaining -- common/autotest_common.sh@862 -- # return 0 00:35:28.704 18:50:14 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:35:28.704 18:50:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:35:29.272 [2024-07-15 18:50:14.642566] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:29.272 nvme0n1 00:35:29.272 true 00:35:29.272 crypto0 00:35:29.272 18:50:14 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:29.530 Running I/O for 5 seconds... 
00:35:34.817 00:35:34.817 Latency(us) 00:35:34.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.817 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:35:34.817 Verification LBA range: start 0x0 length 0x2000 00:35:34.817 crypto0 : 5.03 6218.42 24.29 0.00 0.00 41012.47 5461.33 29959.31 00:35:34.817 =================================================================================================================== 00:35:34.817 Total : 6218.42 24.29 0.00 0.00 41012.47 5461.33 29959.31 00:35:34.817 0 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:34.817 18:50:19 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@205 -- # sequence=62558 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:34.817 18:50:20 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:34.817 18:50:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@206 -- # encrypt=31279 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:35.075 18:50:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@207 -- # decrypt=31279 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:35:35.334 18:50:20 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:35.334 18:50:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:35:35.592 18:50:21 chaining -- bdev/chaining.sh@208 -- # crc32c=62558 00:35:35.592 18:50:21 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:35:35.592 18:50:21 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:35:35.592 18:50:21 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:35:35.592 18:50:21 chaining -- bdev/chaining.sh@214 -- # killprocess 3018043 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@948 -- # '[' -z 3018043 ']' 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@952 -- # kill -0 3018043 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@953 -- # uname 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3018043 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3018043' 00:35:35.592 killing process with pid 3018043 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@967 -- # kill 3018043 00:35:35.592 Received shutdown signal, test time was about 5.000000 seconds 00:35:35.592 00:35:35.592 Latency(us) 00:35:35.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.592 
=================================================================================================================== 00:35:35.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.592 18:50:21 chaining -- common/autotest_common.sh@972 -- # wait 3018043 00:35:35.851 18:50:21 chaining -- bdev/chaining.sh@219 -- # bperfpid=3019119 00:35:35.851 18:50:21 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:35:35.851 18:50:21 chaining -- bdev/chaining.sh@221 -- # waitforlisten 3019119 /var/tmp/bperf.sock 00:35:35.851 18:50:21 chaining -- common/autotest_common.sh@829 -- # '[' -z 3019119 ']' 00:35:35.851 18:50:21 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.851 18:50:21 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.851 18:50:21 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.851 18:50:21 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.851 18:50:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:35.851 [2024-07-15 18:50:21.382662] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:35:35.851 [2024-07-15 18:50:21.382771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019119 ] 00:35:36.109 [2024-07-15 18:50:21.521521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.109 [2024-07-15 18:50:21.615680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.045 18:50:22 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:37.045 18:50:22 chaining -- common/autotest_common.sh@862 -- # return 0 00:35:37.045 18:50:22 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:35:37.045 18:50:22 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:35:37.612 [2024-07-15 18:50:22.958691] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:37.612 nvme0n1 00:35:37.612 true 00:35:37.612 crypto0 00:35:37.612 18:50:22 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.612 Running I/O for 5 seconds... 
00:35:42.971 00:35:42.971 Latency(us) 00:35:42.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.971 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:35:42.971 Verification LBA range: start 0x0 length 0x200 00:35:42.971 crypto0 : 5.01 1528.14 95.51 0.00 0.00 20526.11 1490.16 22094.99 00:35:42.971 =================================================================================================================== 00:35:42.971 Total : 1528.14 95.51 0.00 0.00 20526.11 1490.16 22094.99 00:35:42.971 0 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@233 -- # sequence=15300 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:42.971 18:50:28 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:42.971 18:50:28 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:42.972 18:50:28 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:42.972 18:50:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@234 -- # encrypt=7650 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:43.230 18:50:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@235 -- # decrypt=7650 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:35:43.488 18:50:28 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:35:43.488 18:50:28 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:43.746 18:50:29 chaining -- bdev/chaining.sh@236 -- # crc32c=15300 00:35:43.746 18:50:29 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:35:43.746 18:50:29 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:35:43.746 18:50:29 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:35:43.746 18:50:29 chaining -- bdev/chaining.sh@242 -- # killprocess 3019119 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@948 -- # '[' -z 3019119 ']' 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@952 -- # kill -0 3019119 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@953 -- # uname 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3019119 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3019119' 00:35:43.746 killing process with pid 3019119 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@967 -- # kill 3019119 00:35:43.746 Received shutdown signal, test time was about 5.000000 seconds 00:35:43.746 00:35:43.746 Latency(us) 00:35:43.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.746 
=================================================================================================================== 00:35:43.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:43.746 18:50:29 chaining -- common/autotest_common.sh@972 -- # wait 3019119 00:35:44.005 18:50:29 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@117 -- # sync 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@120 -- # set +e 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:44.005 rmmod nvme_tcp 00:35:44.005 rmmod nvme_fabrics 00:35:44.005 rmmod nvme_keyring 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@124 -- # set -e 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@125 -- # return 0 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@489 -- # '[' -n 3017810 ']' 00:35:44.005 18:50:29 chaining -- nvmf/common.sh@490 -- # killprocess 3017810 00:35:44.005 18:50:29 chaining -- common/autotest_common.sh@948 -- # '[' -z 3017810 ']' 00:35:44.005 18:50:29 chaining -- common/autotest_common.sh@952 -- # kill -0 3017810 00:35:44.005 18:50:29 chaining -- common/autotest_common.sh@953 -- # uname 00:35:44.005 18:50:29 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:44.005 18:50:29 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017810 00:35:44.263 18:50:29 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:44.263 18:50:29 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:44.263 18:50:29 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017810' 00:35:44.263 killing process with pid 
3017810 00:35:44.264 18:50:29 chaining -- common/autotest_common.sh@967 -- # kill 3017810 00:35:44.264 18:50:29 chaining -- common/autotest_common.sh@972 -- # wait 3017810 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.523 18:50:29 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.523 18:50:29 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.523 18:50:29 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:44.523 18:50:29 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:35:44.523 00:35:44.523 real 0m44.889s 00:35:44.523 user 1m1.456s 00:35:44.523 sys 0m10.941s 00:35:44.523 18:50:29 chaining -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:44.523 18:50:29 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:44.523 ************************************ 00:35:44.523 END TEST chaining 00:35:44.523 ************************************ 00:35:44.523 18:50:29 -- common/autotest_common.sh@1142 -- # return 0 00:35:44.523 18:50:29 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:35:44.523 18:50:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:35:44.523 18:50:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:35:44.523 18:50:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:35:44.523 18:50:29 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:35:44.523 18:50:29 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:35:44.523 18:50:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:44.523 
18:50:29 -- common/autotest_common.sh@10 -- # set +x 00:35:44.523 18:50:29 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:35:44.523 18:50:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:44.523 18:50:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:44.523 18:50:29 -- common/autotest_common.sh@10 -- # set +x 00:35:48.715 INFO: APP EXITING 00:35:48.715 INFO: killing all VMs 00:35:48.715 INFO: killing vhost app 00:35:48.715 INFO: EXIT DONE 00:35:52.004 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:52.004 Waiting for block devices as requested 00:35:52.263 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:52.263 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:52.263 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:52.523 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:52.523 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:52.523 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:52.523 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:52.782 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:52.782 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:52.782 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.041 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.041 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.041 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.041 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:53.300 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:53.300 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:53.300 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:56.591 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:56.850 Cleaning 00:35:56.850 Removing: /var/run/dpdk/spdk0/config 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:56.850 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:56.850 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:56.850 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:56.850 Removing: /dev/shm/nvmf_trace.0 00:35:56.850 Removing: /dev/shm/spdk_tgt_trace.pid2707649 00:35:56.850 Removing: /var/run/dpdk/spdk0 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2704046 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2706461 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2707649 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2708268 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2709167 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2709408 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2710329 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2710549 00:35:56.850 Removing: /var/run/dpdk/spdk_pid2710777 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2713854 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2715726 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2716083 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2716384 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2716869 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2717155 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2717558 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2718009 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2718281 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2719196 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2722320 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2722567 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2722921 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2723226 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2723256 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2723524 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2723759 00:35:57.109 Removing: 
/var/run/dpdk/spdk_pid2723994 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2724232 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2724471 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2724709 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2724946 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2725244 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2725566 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2725862 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2726096 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2726336 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2726571 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2726809 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2727046 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2727304 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2727639 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2727967 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2728205 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2728448 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2728677 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2728921 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2729364 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2729605 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2730050 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2730295 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2730728 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2730987 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2731403 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2731503 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2731890 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2732349 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2732803 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2733036 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2738040 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2740171 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2742196 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2743288 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2744690 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2745058 
00:35:57.109 Removing: /var/run/dpdk/spdk_pid2745099 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2745486 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2750870 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2751501 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2752612 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2752970 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2759425 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2761473 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2762496 00:35:57.109 Removing: /var/run/dpdk/spdk_pid2768085 00:35:57.110 Removing: /var/run/dpdk/spdk_pid2770413 00:35:57.110 Removing: /var/run/dpdk/spdk_pid2771370 00:35:57.110 Removing: /var/run/dpdk/spdk_pid2776596 00:35:57.110 Removing: /var/run/dpdk/spdk_pid2779470 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2780723 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2792872 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2795523 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2796688 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2808433 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2811061 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2812349 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2824988 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2828928 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2830243 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2843968 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2847302 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2848629 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2862420 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2865338 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2866893 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2881187 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2885947 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2887117 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2888902 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2892751 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2899170 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2902382 00:35:57.368 Removing: 
/var/run/dpdk/spdk_pid2908309 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2912287 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2918689 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2922121 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2929828 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2932899 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2940888 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2943757 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2951767 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2954543 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2959676 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2960071 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2960425 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2960780 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2961372 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2962179 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2962954 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2963264 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2965295 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2967639 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2969556 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2971005 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2973068 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2975006 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2976935 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2978492 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2979146 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2979587 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2982198 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2984414 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2986472 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2987717 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2989207 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2989848 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2989886 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2989948 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2990377 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2990413 
00:35:57.368 Removing: /var/run/dpdk/spdk_pid2991866 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2993681 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2995826 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2996735 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2997741 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2997976 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2997995 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2998108 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2999138 00:35:57.368 Removing: /var/run/dpdk/spdk_pid2999794 00:35:57.368 Removing: /var/run/dpdk/spdk_pid3000235 00:35:57.368 Removing: /var/run/dpdk/spdk_pid3002804 00:35:57.368 Removing: /var/run/dpdk/spdk_pid3005039 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3007066 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3008343 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3009646 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3010289 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3010394 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3014497 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3014637 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3014869 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3015020 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3015147 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3015400 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3016454 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3018043 00:35:57.628 Removing: /var/run/dpdk/spdk_pid3019119 00:35:57.628 Clean 00:35:57.628 18:50:43 -- common/autotest_common.sh@1451 -- # return 0 00:35:57.628 18:50:43 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:35:57.628 18:50:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:57.628 18:50:43 -- common/autotest_common.sh@10 -- # set +x 00:35:57.628 18:50:43 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:35:57.628 18:50:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:57.628 18:50:43 -- common/autotest_common.sh@10 -- # set +x 00:35:57.628 18:50:43 -- spdk/autotest.sh@387 -- 
# chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:35:57.628 18:50:43 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:35:57.628 18:50:43 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:35:57.628 18:50:43 -- spdk/autotest.sh@391 -- # hash lcov 00:35:57.628 18:50:43 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:57.628 18:50:43 -- spdk/autotest.sh@393 -- # hostname 00:35:57.628 18:50:43 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-12 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:35:57.885 geninfo: WARNING: invalid characters removed from testname! 00:36:30.022 18:51:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:30.022 18:51:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:33.309 18:51:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:35.843 18:51:21 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:39.151 18:51:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:41.686 18:51:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:44.967 18:51:29 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:44.967 18:51:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:36:44.967 18:51:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:44.967 18:51:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.967 18:51:29 -- scripts/common.sh@517 -- $ 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.967 18:51:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.967 18:51:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.967 18:51:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.967 18:51:29 -- paths/export.sh@5 -- $ export PATH 00:36:44.967 18:51:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.967 18:51:29 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:44.967 18:51:29 -- common/autobuild_common.sh@444 -- $ date +%s 00:36:44.967 
18:51:29 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721062289.XXXXXX 00:36:44.967 18:51:29 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721062289.TElBWz 00:36:44.967 18:51:29 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:36:44.967 18:51:29 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:36:44.967 18:51:29 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:36:44.967 18:51:29 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:44.967 18:51:29 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:44.967 18:51:29 -- common/autobuild_common.sh@460 -- $ get_config_params 00:36:44.967 18:51:29 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:36:44.967 18:51:29 -- common/autotest_common.sh@10 -- $ set +x 00:36:44.967 18:51:29 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:36:44.967 18:51:29 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:36:44.967 18:51:29 -- pm/common@17 -- $ local monitor 00:36:44.967 18:51:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:44.967 18:51:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:44.967 18:51:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:44.967 18:51:29 -- pm/common@21 -- $ date +%s 00:36:44.967 18:51:29 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:36:44.967 18:51:29 -- pm/common@21 -- $ date +%s 00:36:44.967 18:51:29 -- pm/common@25 -- $ sleep 1 00:36:44.967 18:51:29 -- pm/common@21 -- $ date +%s 00:36:44.967 18:51:29 -- pm/common@21 -- $ date +%s 00:36:44.967 18:51:29 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062289 00:36:44.967 18:51:29 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062289 00:36:44.967 18:51:29 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062289 00:36:44.967 18:51:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062289 00:36:44.967 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062289_collect-vmstat.pm.log 00:36:44.967 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062289_collect-cpu-load.pm.log 00:36:44.967 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062289_collect-cpu-temp.pm.log 00:36:44.967 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062289_collect-bmc-pm.bmc.pm.log 00:36:45.533 18:51:30 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:36:45.534 18:51:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88 00:36:45.534 18:51:30 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:45.534 18:51:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:45.534 18:51:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:45.534 18:51:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:45.534 18:51:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:45.534 18:51:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:45.534 18:51:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:36:45.534 18:51:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:45.534 18:51:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:45.534 18:51:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:45.534 18:51:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:45.534 18:51:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:45.534 18:51:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:45.534 18:51:30 -- pm/common@44 -- $ pid=3027497 00:36:45.534 18:51:30 -- pm/common@50 -- $ kill -TERM 3027497 00:36:45.534 18:51:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:45.534 18:51:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:45.534 18:51:30 -- pm/common@44 -- $ pid=3027499 00:36:45.534 18:51:31 -- pm/common@50 -- $ kill -TERM 3027499 00:36:45.534 18:51:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:45.534 18:51:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:45.534 18:51:31 -- pm/common@44 -- $ pid=3027501 00:36:45.534 18:51:31 -- pm/common@50 -- $ kill -TERM 3027501 00:36:45.534 
18:51:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:45.534 18:51:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:45.534 18:51:31 -- pm/common@44 -- $ pid=3027525 00:36:45.534 18:51:31 -- pm/common@50 -- $ sudo -E kill -TERM 3027525 00:36:45.534 + [[ -n 2582729 ]] 00:36:45.534 + sudo kill 2582729 00:36:45.545 [Pipeline] } 00:36:45.565 [Pipeline] // stage 00:36:45.570 [Pipeline] } 00:36:45.590 [Pipeline] // timeout 00:36:45.595 [Pipeline] } 00:36:45.614 [Pipeline] // catchError 00:36:45.620 [Pipeline] } 00:36:45.641 [Pipeline] // wrap 00:36:45.649 [Pipeline] } 00:36:45.668 [Pipeline] // catchError 00:36:45.676 [Pipeline] stage 00:36:45.678 [Pipeline] { (Epilogue) 00:36:45.693 [Pipeline] catchError 00:36:45.695 [Pipeline] { 00:36:45.709 [Pipeline] echo 00:36:45.711 Cleanup processes 00:36:45.717 [Pipeline] sh 00:36:45.999 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:46.000 3027612 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:36:46.000 3027876 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:46.013 [Pipeline] sh 00:36:46.296 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:46.296 ++ grep -v 'sudo pgrep' 00:36:46.296 ++ awk '{print $1}' 00:36:46.296 + sudo kill -9 3027612 00:36:46.308 [Pipeline] sh 00:36:46.645 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:01.530 [Pipeline] sh 00:37:01.813 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:01.813 Artifacts sizes are good 00:37:01.827 [Pipeline] archiveArtifacts 00:37:01.835 Archiving artifacts 00:37:02.012 [Pipeline] sh 00:37:02.296 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest 00:37:02.311 [Pipeline] cleanWs 00:37:02.323 [WS-CLEANUP] Deleting project workspace... 00:37:02.324 [WS-CLEANUP] Deferred wipeout is used... 
00:37:02.337 [WS-CLEANUP] done 00:37:02.343 [Pipeline] } 00:37:02.384 [Pipeline] // catchError 00:37:02.393 [Pipeline] sh 00:37:02.667 + logger -p user.info -t JENKINS-CI 00:37:02.675 [Pipeline] } 00:37:02.692 [Pipeline] // stage 00:37:02.697 [Pipeline] } 00:37:02.712 [Pipeline] // node 00:37:02.718 [Pipeline] End of Pipeline 00:37:02.752 Finished: SUCCESS